Dec 15 09:56:09 localhost kernel: Linux version 5.14.0-648.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025
Dec 15 09:56:09 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 15 09:56:09 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 15 09:56:09 localhost kernel: BIOS-provided physical RAM map:
Dec 15 09:56:09 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 15 09:56:09 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 15 09:56:09 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 15 09:56:09 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 15 09:56:09 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 15 09:56:09 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 15 09:56:09 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 15 09:56:09 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec 15 09:56:09 localhost kernel: NX (Execute Disable) protection: active
Dec 15 09:56:09 localhost kernel: APIC: Static calls initialized
Dec 15 09:56:09 localhost kernel: SMBIOS 2.8 present.
Dec 15 09:56:09 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 15 09:56:09 localhost kernel: Hypervisor detected: KVM
Dec 15 09:56:09 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 15 09:56:09 localhost kernel: kvm-clock: using sched offset of 3206241750 cycles
Dec 15 09:56:09 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 15 09:56:09 localhost kernel: tsc: Detected 2800.000 MHz processor
Dec 15 09:56:09 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 15 09:56:09 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 15 09:56:09 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec 15 09:56:09 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 15 09:56:09 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 15 09:56:09 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 15 09:56:09 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 15 09:56:09 localhost kernel: Using GB pages for direct mapping
Dec 15 09:56:09 localhost kernel: RAMDISK: [mem 0x2d46a000-0x32a2cfff]
Dec 15 09:56:09 localhost kernel: ACPI: Early table checksum verification disabled
Dec 15 09:56:09 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 15 09:56:09 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 15 09:56:09 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 15 09:56:09 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 15 09:56:09 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 15 09:56:09 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 15 09:56:09 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 15 09:56:09 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 15 09:56:09 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 15 09:56:09 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 15 09:56:09 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 15 09:56:09 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 15 09:56:09 localhost kernel: No NUMA configuration found
Dec 15 09:56:09 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec 15 09:56:09 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Dec 15 09:56:09 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec 15 09:56:09 localhost kernel: Zone ranges:
Dec 15 09:56:09 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 15 09:56:09 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 15 09:56:09 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec 15 09:56:09 localhost kernel:   Device   empty
Dec 15 09:56:09 localhost kernel: Movable zone start for each node
Dec 15 09:56:09 localhost kernel: Early memory node ranges
Dec 15 09:56:09 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 15 09:56:09 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 15 09:56:09 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec 15 09:56:09 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec 15 09:56:09 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 15 09:56:09 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 15 09:56:09 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 15 09:56:09 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 15 09:56:09 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 15 09:56:09 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 15 09:56:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 15 09:56:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 15 09:56:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 15 09:56:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 15 09:56:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 15 09:56:09 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 15 09:56:09 localhost kernel: TSC deadline timer available
Dec 15 09:56:09 localhost kernel: CPU topo: Max. logical packages:   8
Dec 15 09:56:09 localhost kernel: CPU topo: Max. logical dies:       8
Dec 15 09:56:09 localhost kernel: CPU topo: Max. dies per package:   1
Dec 15 09:56:09 localhost kernel: CPU topo: Max. threads per core:   1
Dec 15 09:56:09 localhost kernel: CPU topo: Num. cores per package:     1
Dec 15 09:56:09 localhost kernel: CPU topo: Num. threads per package:   1
Dec 15 09:56:09 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec 15 09:56:09 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 15 09:56:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 15 09:56:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 15 09:56:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 15 09:56:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 15 09:56:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 15 09:56:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 15 09:56:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 15 09:56:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 15 09:56:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 15 09:56:09 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 15 09:56:09 localhost kernel: Booting paravirtualized kernel on KVM
Dec 15 09:56:09 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 15 09:56:09 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 15 09:56:09 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec 15 09:56:09 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Dec 15 09:56:09 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Dec 15 09:56:09 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 15 09:56:09 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 15 09:56:09 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64", will be passed to user space.
Dec 15 09:56:09 localhost kernel: random: crng init done
Dec 15 09:56:09 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 15 09:56:09 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 15 09:56:09 localhost kernel: Fallback order for Node 0: 0 
Dec 15 09:56:09 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec 15 09:56:09 localhost kernel: Policy zone: Normal
Dec 15 09:56:09 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 15 09:56:09 localhost kernel: software IO TLB: area num 8.
Dec 15 09:56:09 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 15 09:56:09 localhost kernel: ftrace: allocating 49357 entries in 193 pages
Dec 15 09:56:09 localhost kernel: ftrace: allocated 193 pages with 3 groups
Dec 15 09:56:09 localhost kernel: Dynamic Preempt: voluntary
Dec 15 09:56:09 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 15 09:56:09 localhost kernel: rcu:         RCU event tracing is enabled.
Dec 15 09:56:09 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 15 09:56:09 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Dec 15 09:56:09 localhost kernel:         Rude variant of Tasks RCU enabled.
Dec 15 09:56:09 localhost kernel:         Tracing variant of Tasks RCU enabled.
Dec 15 09:56:09 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 15 09:56:09 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 15 09:56:09 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 15 09:56:09 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 15 09:56:09 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 15 09:56:09 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 15 09:56:09 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 15 09:56:09 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 15 09:56:09 localhost kernel: Console: colour VGA+ 80x25
Dec 15 09:56:09 localhost kernel: printk: console [ttyS0] enabled
Dec 15 09:56:09 localhost kernel: ACPI: Core revision 20230331
Dec 15 09:56:09 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 15 09:56:09 localhost kernel: x2apic enabled
Dec 15 09:56:09 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Dec 15 09:56:09 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 15 09:56:09 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Dec 15 09:56:09 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 15 09:56:09 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 15 09:56:09 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 15 09:56:09 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 15 09:56:09 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 15 09:56:09 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 15 09:56:09 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 15 09:56:09 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 15 09:56:09 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 15 09:56:09 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 15 09:56:09 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 15 09:56:09 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 15 09:56:09 localhost kernel: x86/bugs: return thunk changed
Dec 15 09:56:09 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 15 09:56:09 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 15 09:56:09 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 15 09:56:09 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 15 09:56:09 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 15 09:56:09 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 15 09:56:09 localhost kernel: Freeing SMP alternatives memory: 40K
Dec 15 09:56:09 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 15 09:56:09 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec 15 09:56:09 localhost kernel: landlock: Up and running.
Dec 15 09:56:09 localhost kernel: Yama: becoming mindful.
Dec 15 09:56:09 localhost kernel: SELinux:  Initializing.
Dec 15 09:56:09 localhost kernel: LSM support for eBPF active
Dec 15 09:56:09 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 15 09:56:09 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 15 09:56:09 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 15 09:56:09 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 15 09:56:09 localhost kernel: ... version:                0
Dec 15 09:56:09 localhost kernel: ... bit width:              48
Dec 15 09:56:09 localhost kernel: ... generic registers:      6
Dec 15 09:56:09 localhost kernel: ... value mask:             0000ffffffffffff
Dec 15 09:56:09 localhost kernel: ... max period:             00007fffffffffff
Dec 15 09:56:09 localhost kernel: ... fixed-purpose events:   0
Dec 15 09:56:09 localhost kernel: ... event mask:             000000000000003f
Dec 15 09:56:09 localhost kernel: signal: max sigframe size: 1776
Dec 15 09:56:09 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 15 09:56:09 localhost kernel: rcu:         Max phase no-delay instances is 400.
Dec 15 09:56:09 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 15 09:56:09 localhost kernel: smpboot: x86: Booting SMP configuration:
Dec 15 09:56:09 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 15 09:56:09 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 15 09:56:09 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Dec 15 09:56:09 localhost kernel: node 0 deferred pages initialised in 9ms
Dec 15 09:56:09 localhost kernel: Memory: 7763892K/8388068K available (16384K kernel code, 5795K rwdata, 13916K rodata, 4192K init, 7164K bss, 618228K reserved, 0K cma-reserved)
Dec 15 09:56:09 localhost kernel: devtmpfs: initialized
Dec 15 09:56:09 localhost kernel: x86/mm: Memory block size: 128MB
Dec 15 09:56:09 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 15 09:56:09 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec 15 09:56:09 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 15 09:56:09 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 15 09:56:09 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec 15 09:56:09 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 15 09:56:09 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 15 09:56:09 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 15 09:56:09 localhost kernel: audit: type=2000 audit(1765792567.799:1): state=initialized audit_enabled=0 res=1
Dec 15 09:56:09 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 15 09:56:09 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 15 09:56:09 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 15 09:56:09 localhost kernel: cpuidle: using governor menu
Dec 15 09:56:09 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 15 09:56:09 localhost kernel: PCI: Using configuration type 1 for base access
Dec 15 09:56:09 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 15 09:56:09 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 15 09:56:09 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 15 09:56:09 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 15 09:56:09 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 15 09:56:09 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 15 09:56:09 localhost kernel: Demotion targets for Node 0: null
Dec 15 09:56:09 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 15 09:56:09 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 15 09:56:09 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 15 09:56:09 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 15 09:56:09 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 15 09:56:09 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 15 09:56:09 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 15 09:56:09 localhost kernel: ACPI: Interpreter enabled
Dec 15 09:56:09 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 15 09:56:09 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 15 09:56:09 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 15 09:56:09 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 15 09:56:09 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 15 09:56:09 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 15 09:56:09 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [3] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [4] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [5] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [6] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [7] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [8] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [9] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [10] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [11] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [12] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [13] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [14] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [15] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [16] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [17] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [18] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [19] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [20] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [21] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [22] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [23] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [24] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [25] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [26] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [27] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [28] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [29] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [30] registered
Dec 15 09:56:09 localhost kernel: acpiphp: Slot [31] registered
Dec 15 09:56:09 localhost kernel: PCI host bridge to bus 0000:00
Dec 15 09:56:09 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 15 09:56:09 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 15 09:56:09 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 15 09:56:09 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 15 09:56:09 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec 15 09:56:09 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 15 09:56:09 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 15 09:56:09 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 15 09:56:09 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 15 09:56:09 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec 15 09:56:09 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec 15 09:56:09 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec 15 09:56:09 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec 15 09:56:09 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec 15 09:56:09 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 15 09:56:09 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec 15 09:56:09 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 15 09:56:09 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 15 09:56:09 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 15 09:56:09 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 15 09:56:09 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 15 09:56:09 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 15 09:56:09 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec 15 09:56:09 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec 15 09:56:09 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 15 09:56:09 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 15 09:56:09 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec 15 09:56:09 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec 15 09:56:09 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 15 09:56:09 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec 15 09:56:09 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 15 09:56:09 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec 15 09:56:09 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec 15 09:56:09 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 15 09:56:09 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 15 09:56:09 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec 15 09:56:09 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 15 09:56:09 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 15 09:56:09 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec 15 09:56:09 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 15 09:56:09 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 15 09:56:09 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 15 09:56:09 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 15 09:56:09 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 15 09:56:09 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 15 09:56:09 localhost kernel: iommu: Default domain type: Translated
Dec 15 09:56:09 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 15 09:56:09 localhost kernel: SCSI subsystem initialized
Dec 15 09:56:09 localhost kernel: ACPI: bus type USB registered
Dec 15 09:56:09 localhost kernel: usbcore: registered new interface driver usbfs
Dec 15 09:56:09 localhost kernel: usbcore: registered new interface driver hub
Dec 15 09:56:09 localhost kernel: usbcore: registered new device driver usb
Dec 15 09:56:09 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 15 09:56:09 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 15 09:56:09 localhost kernel: PTP clock support registered
Dec 15 09:56:09 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 15 09:56:09 localhost kernel: NetLabel: Initializing
Dec 15 09:56:09 localhost kernel: NetLabel:  domain hash size = 128
Dec 15 09:56:09 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 15 09:56:09 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Dec 15 09:56:09 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 15 09:56:09 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 15 09:56:09 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 15 09:56:09 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Dec 15 09:56:09 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 15 09:56:09 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 15 09:56:09 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 15 09:56:09 localhost kernel: vgaarb: loaded
Dec 15 09:56:09 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 15 09:56:09 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 15 09:56:09 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 15 09:56:09 localhost kernel: pnp: PnP ACPI init
Dec 15 09:56:09 localhost kernel: pnp 00:03: [dma 2]
Dec 15 09:56:09 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 15 09:56:09 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 15 09:56:09 localhost kernel: NET: Registered PF_INET protocol family
Dec 15 09:56:09 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 15 09:56:09 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 15 09:56:09 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 15 09:56:09 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 15 09:56:09 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 15 09:56:09 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 15 09:56:09 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec 15 09:56:09 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 15 09:56:09 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 15 09:56:09 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 15 09:56:09 localhost kernel: NET: Registered PF_XDP protocol family
Dec 15 09:56:09 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 15 09:56:09 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 15 09:56:09 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 15 09:56:09 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 15 09:56:09 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec 15 09:56:09 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 15 09:56:09 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 15 09:56:09 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 15 09:56:09 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 82744 usecs
Dec 15 09:56:09 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 15 09:56:09 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 15 09:56:09 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 15 09:56:09 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 15 09:56:09 localhost kernel: ACPI: bus type thunderbolt registered
Dec 15 09:56:09 localhost kernel: Initialise system trusted keyrings
Dec 15 09:56:09 localhost kernel: Key type blacklist registered
Dec 15 09:56:09 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec 15 09:56:09 localhost kernel: zbud: loaded
Dec 15 09:56:09 localhost kernel: integrity: Platform Keyring initialized
Dec 15 09:56:09 localhost kernel: integrity: Machine keyring initialized
Dec 15 09:56:09 localhost kernel: Freeing initrd memory: 87820K
Dec 15 09:56:09 localhost kernel: NET: Registered PF_ALG protocol family
Dec 15 09:56:09 localhost kernel: xor: automatically using best checksumming function   avx       
Dec 15 09:56:09 localhost kernel: Key type asymmetric registered
Dec 15 09:56:09 localhost kernel: Asymmetric key parser 'x509' registered
Dec 15 09:56:09 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 15 09:56:09 localhost kernel: io scheduler mq-deadline registered
Dec 15 09:56:09 localhost kernel: io scheduler kyber registered
Dec 15 09:56:09 localhost kernel: io scheduler bfq registered
Dec 15 09:56:09 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 15 09:56:09 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 15 09:56:09 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 15 09:56:09 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 15 09:56:09 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 15 09:56:09 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 15 09:56:09 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 15 09:56:09 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 15 09:56:09 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 15 09:56:09 localhost kernel: Non-volatile memory driver v1.3
Dec 15 09:56:09 localhost kernel: rdac: device handler registered
Dec 15 09:56:09 localhost kernel: hp_sw: device handler registered
Dec 15 09:56:09 localhost kernel: emc: device handler registered
Dec 15 09:56:09 localhost kernel: alua: device handler registered
Dec 15 09:56:09 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 15 09:56:09 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 15 09:56:09 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 15 09:56:09 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 15 09:56:09 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 15 09:56:09 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 15 09:56:09 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 15 09:56:09 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-648.el9.x86_64 uhci_hcd
Dec 15 09:56:09 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 15 09:56:09 localhost kernel: hub 1-0:1.0: USB hub found
Dec 15 09:56:09 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 15 09:56:09 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 15 09:56:09 localhost kernel: usbserial: USB Serial support registered for generic
Dec 15 09:56:09 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 15 09:56:09 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 15 09:56:09 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 15 09:56:09 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 15 09:56:09 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 15 09:56:09 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 15 09:56:09 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 15 09:56:09 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 15 09:56:09 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 15 09:56:09 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-15T09:56:08 UTC (1765792568)
Dec 15 09:56:09 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 15 09:56:09 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 15 09:56:09 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 15 09:56:09 localhost kernel: usbcore: registered new interface driver usbhid
Dec 15 09:56:09 localhost kernel: usbhid: USB HID core driver
Dec 15 09:56:09 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 15 09:56:09 localhost kernel: Initializing XFRM netlink socket
Dec 15 09:56:09 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 15 09:56:09 localhost kernel: Segment Routing with IPv6
Dec 15 09:56:09 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 15 09:56:09 localhost kernel: mpls_gso: MPLS GSO support
Dec 15 09:56:09 localhost kernel: IPI shorthand broadcast: enabled
Dec 15 09:56:09 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 15 09:56:09 localhost kernel: AES CTR mode by8 optimization enabled
Dec 15 09:56:09 localhost kernel: sched_clock: Marking stable (1298005139, 159872123)->(1549829896, -91952634)
Dec 15 09:56:09 localhost kernel: registered taskstats version 1
Dec 15 09:56:09 localhost kernel: Loading compiled-in X.509 certificates
Dec 15 09:56:09 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 15 09:56:09 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 15 09:56:09 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 15 09:56:09 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec 15 09:56:09 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec 15 09:56:09 localhost kernel: Demotion targets for Node 0: null
Dec 15 09:56:09 localhost kernel: page_owner is disabled
Dec 15 09:56:09 localhost kernel: Key type .fscrypt registered
Dec 15 09:56:09 localhost kernel: Key type fscrypt-provisioning registered
Dec 15 09:56:09 localhost kernel: Key type big_key registered
Dec 15 09:56:09 localhost kernel: Key type encrypted registered
Dec 15 09:56:09 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 15 09:56:09 localhost kernel: Loading compiled-in module X.509 certificates
Dec 15 09:56:09 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 15 09:56:09 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 15 09:56:09 localhost kernel: ima: No architecture policies found
Dec 15 09:56:09 localhost kernel: evm: Initialising EVM extended attributes:
Dec 15 09:56:09 localhost kernel: evm: security.selinux
Dec 15 09:56:09 localhost kernel: evm: security.SMACK64 (disabled)
Dec 15 09:56:09 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 15 09:56:09 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 15 09:56:09 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 15 09:56:09 localhost kernel: evm: security.apparmor (disabled)
Dec 15 09:56:09 localhost kernel: evm: security.ima
Dec 15 09:56:09 localhost kernel: evm: security.capability
Dec 15 09:56:09 localhost kernel: evm: HMAC attrs: 0x1
Dec 15 09:56:09 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 15 09:56:09 localhost kernel: Running certificate verification RSA selftest
Dec 15 09:56:09 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 15 09:56:09 localhost kernel: Running certificate verification ECDSA selftest
Dec 15 09:56:09 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec 15 09:56:09 localhost kernel: clk: Disabling unused clocks
Dec 15 09:56:09 localhost kernel: Freeing unused decrypted memory: 2028K
Dec 15 09:56:09 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Dec 15 09:56:09 localhost kernel: Write protecting the kernel read-only data: 30720k
Dec 15 09:56:09 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Dec 15 09:56:09 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 15 09:56:09 localhost kernel: Run /init as init process
Dec 15 09:56:09 localhost kernel:   with arguments:
Dec 15 09:56:09 localhost kernel:     /init
Dec 15 09:56:09 localhost kernel:   with environment:
Dec 15 09:56:09 localhost kernel:     HOME=/
Dec 15 09:56:09 localhost kernel:     TERM=linux
Dec 15 09:56:09 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64
Dec 15 09:56:09 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 15 09:56:09 localhost systemd[1]: Detected virtualization kvm.
Dec 15 09:56:09 localhost systemd[1]: Detected architecture x86-64.
Dec 15 09:56:09 localhost systemd[1]: Running in initrd.
Dec 15 09:56:09 localhost systemd[1]: No hostname configured, using default hostname.
Dec 15 09:56:09 localhost systemd[1]: Hostname set to <localhost>.
Dec 15 09:56:09 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 15 09:56:09 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 15 09:56:09 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 15 09:56:09 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 15 09:56:09 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 15 09:56:09 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 15 09:56:09 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 15 09:56:09 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 15 09:56:09 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 15 09:56:09 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 15 09:56:09 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 15 09:56:09 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 15 09:56:09 localhost systemd[1]: Reached target Local File Systems.
Dec 15 09:56:09 localhost systemd[1]: Reached target Path Units.
Dec 15 09:56:09 localhost systemd[1]: Reached target Slice Units.
Dec 15 09:56:09 localhost systemd[1]: Reached target Swaps.
Dec 15 09:56:09 localhost systemd[1]: Reached target Timer Units.
Dec 15 09:56:09 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 15 09:56:09 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 15 09:56:09 localhost systemd[1]: Listening on Journal Socket.
Dec 15 09:56:09 localhost systemd[1]: Listening on udev Control Socket.
Dec 15 09:56:09 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 15 09:56:09 localhost systemd[1]: Reached target Socket Units.
Dec 15 09:56:09 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 15 09:56:09 localhost systemd[1]: Starting Journal Service...
Dec 15 09:56:09 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 15 09:56:09 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 15 09:56:09 localhost systemd[1]: Starting Create System Users...
Dec 15 09:56:09 localhost systemd[1]: Starting Setup Virtual Console...
Dec 15 09:56:09 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 15 09:56:09 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 15 09:56:09 localhost systemd[1]: Finished Create System Users.
Dec 15 09:56:09 localhost systemd-journald[308]: Journal started
Dec 15 09:56:09 localhost systemd-journald[308]: Runtime Journal (/run/log/journal/33a62224c13649ddbf900c345f3eee20) is 8.0M, max 153.6M, 145.6M free.
Dec 15 09:56:09 localhost systemd-sysusers[312]: Creating group 'users' with GID 100.
Dec 15 09:56:09 localhost systemd-sysusers[312]: Creating group 'dbus' with GID 81.
Dec 15 09:56:09 localhost systemd-sysusers[312]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 15 09:56:09 localhost systemd[1]: Started Journal Service.
Dec 15 09:56:09 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 15 09:56:09 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 15 09:56:09 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 15 09:56:09 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 15 09:56:09 localhost systemd[1]: Finished Setup Virtual Console.
Dec 15 09:56:09 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 15 09:56:09 localhost systemd[1]: Starting dracut cmdline hook...
Dec 15 09:56:09 localhost dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Dec 15 09:56:09 localhost dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 15 09:56:09 localhost systemd[1]: Finished dracut cmdline hook.
Dec 15 09:56:09 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 15 09:56:09 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 15 09:56:09 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 15 09:56:09 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec 15 09:56:09 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 15 09:56:09 localhost kernel: RPC: Registered udp transport module.
Dec 15 09:56:09 localhost kernel: RPC: Registered tcp transport module.
Dec 15 09:56:09 localhost kernel: RPC: Registered tcp-with-tls transport module.
Dec 15 09:56:09 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 15 09:56:09 localhost rpc.statd[444]: Version 2.5.4 starting
Dec 15 09:56:09 localhost rpc.statd[444]: Initializing NSM state
Dec 15 09:56:09 localhost rpc.idmapd[449]: Setting log level to 0
Dec 15 09:56:09 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 15 09:56:09 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 15 09:56:09 localhost systemd-udevd[462]: Using default interface naming scheme 'rhel-9.0'.
Dec 15 09:56:09 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 15 09:56:09 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 15 09:56:09 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 15 09:56:09 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 15 09:56:09 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 15 09:56:09 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 15 09:56:09 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 15 09:56:09 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 15 09:56:09 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 15 09:56:09 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 15 09:56:09 localhost systemd[1]: Reached target Network.
Dec 15 09:56:09 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 15 09:56:09 localhost systemd[1]: Starting dracut initqueue hook...
Dec 15 09:56:09 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec 15 09:56:09 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec 15 09:56:09 localhost kernel:  vda: vda1
Dec 15 09:56:09 localhost kernel: libata version 3.00 loaded.
Dec 15 09:56:09 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Dec 15 09:56:09 localhost systemd-udevd[492]: Network interface NamePolicy= disabled on kernel command line.
Dec 15 09:56:09 localhost kernel: scsi host0: ata_piix
Dec 15 09:56:09 localhost kernel: scsi host1: ata_piix
Dec 15 09:56:09 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec 15 09:56:09 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec 15 09:56:09 localhost systemd[1]: Found device /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266.
Dec 15 09:56:09 localhost systemd[1]: Reached target Initrd Root Device.
Dec 15 09:56:10 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 15 09:56:10 localhost kernel: ata1: found unknown device (class 0)
Dec 15 09:56:10 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 15 09:56:10 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 15 09:56:10 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 15 09:56:10 localhost systemd[1]: Reached target System Initialization.
Dec 15 09:56:10 localhost systemd[1]: Reached target Basic System.
Dec 15 09:56:10 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 15 09:56:10 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 15 09:56:10 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 15 09:56:10 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 15 09:56:10 localhost systemd[1]: Finished dracut initqueue hook.
Dec 15 09:56:10 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 15 09:56:10 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 15 09:56:10 localhost systemd[1]: Reached target Remote File Systems.
Dec 15 09:56:10 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 15 09:56:10 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 15 09:56:10 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266...
Dec 15 09:56:10 localhost systemd-fsck[558]: /usr/sbin/fsck.xfs: XFS file system.
Dec 15 09:56:10 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266.
Dec 15 09:56:10 localhost systemd[1]: Mounting /sysroot...
Dec 15 09:56:10 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 15 09:56:10 localhost kernel: XFS (vda1): Mounting V5 Filesystem cbdedf45-ed1d-4952-82a8-33a12c0ba266
Dec 15 09:56:10 localhost kernel: XFS (vda1): Ending clean mount
Dec 15 09:56:10 localhost systemd[1]: Mounted /sysroot.
Dec 15 09:56:10 localhost systemd[1]: Reached target Initrd Root File System.
Dec 15 09:56:10 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 15 09:56:10 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 15 09:56:10 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 15 09:56:10 localhost systemd[1]: Reached target Initrd File Systems.
Dec 15 09:56:10 localhost systemd[1]: Reached target Initrd Default Target.
Dec 15 09:56:10 localhost systemd[1]: Starting dracut mount hook...
Dec 15 09:56:10 localhost systemd[1]: Finished dracut mount hook.
Dec 15 09:56:10 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 15 09:56:10 localhost rpc.idmapd[449]: exiting on signal 15
Dec 15 09:56:10 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 15 09:56:11 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 15 09:56:11 localhost systemd[1]: Stopped target Network.
Dec 15 09:56:11 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 15 09:56:11 localhost systemd[1]: Stopped target Timer Units.
Dec 15 09:56:11 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 15 09:56:11 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 15 09:56:11 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 15 09:56:11 localhost systemd[1]: Stopped target Basic System.
Dec 15 09:56:11 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 15 09:56:11 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 15 09:56:11 localhost systemd[1]: Stopped target Path Units.
Dec 15 09:56:11 localhost systemd[1]: Stopped target Remote File Systems.
Dec 15 09:56:11 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 15 09:56:11 localhost systemd[1]: Stopped target Slice Units.
Dec 15 09:56:11 localhost systemd[1]: Stopped target Socket Units.
Dec 15 09:56:11 localhost systemd[1]: Stopped target System Initialization.
Dec 15 09:56:11 localhost systemd[1]: Stopped target Local File Systems.
Dec 15 09:56:11 localhost systemd[1]: Stopped target Swaps.
Dec 15 09:56:11 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Stopped dracut mount hook.
Dec 15 09:56:11 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 15 09:56:11 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 15 09:56:11 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 15 09:56:11 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 15 09:56:11 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 15 09:56:11 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 15 09:56:11 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 15 09:56:11 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 15 09:56:11 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 15 09:56:11 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 15 09:56:11 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 15 09:56:11 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 15 09:56:11 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Closed udev Control Socket.
Dec 15 09:56:11 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Closed udev Kernel Socket.
Dec 15 09:56:11 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 15 09:56:11 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 15 09:56:11 localhost systemd[1]: Starting Cleanup udev Database...
Dec 15 09:56:11 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 15 09:56:11 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 15 09:56:11 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Stopped Create System Users.
Dec 15 09:56:11 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Finished Cleanup udev Database.
Dec 15 09:56:11 localhost systemd[1]: Reached target Switch Root.
Dec 15 09:56:11 localhost systemd[1]: Starting Switch Root...
Dec 15 09:56:11 localhost systemd[1]: Switching root.
Dec 15 09:56:11 localhost systemd-journald[308]: Journal stopped
Dec 15 09:56:11 localhost systemd-journald[308]: Received SIGTERM from PID 1 (systemd).
Dec 15 09:56:11 localhost kernel: audit: type=1404 audit(1765792571.232:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 15 09:56:11 localhost kernel: SELinux:  policy capability network_peer_controls=1
Dec 15 09:56:11 localhost kernel: SELinux:  policy capability open_perms=1
Dec 15 09:56:11 localhost kernel: SELinux:  policy capability extended_socket_class=1
Dec 15 09:56:11 localhost kernel: SELinux:  policy capability always_check_network=0
Dec 15 09:56:11 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 15 09:56:11 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 15 09:56:11 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 15 09:56:11 localhost kernel: audit: type=1403 audit(1765792571.362:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 15 09:56:11 localhost systemd[1]: Successfully loaded SELinux policy in 133.469ms.
Dec 15 09:56:11 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 31.770ms.
Dec 15 09:56:11 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 15 09:56:11 localhost systemd[1]: Detected virtualization kvm.
Dec 15 09:56:11 localhost systemd[1]: Detected architecture x86-64.
Dec 15 09:56:11 localhost systemd-rc-local-generator[639]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 09:56:11 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Stopped Switch Root.
Dec 15 09:56:11 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 15 09:56:11 localhost systemd[1]: Created slice Slice /system/getty.
Dec 15 09:56:11 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 15 09:56:11 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 15 09:56:11 localhost systemd[1]: Created slice User and Session Slice.
Dec 15 09:56:11 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 15 09:56:11 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 15 09:56:11 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 15 09:56:11 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 15 09:56:11 localhost systemd[1]: Stopped target Switch Root.
Dec 15 09:56:11 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 15 09:56:11 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 15 09:56:11 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 15 09:56:11 localhost systemd[1]: Reached target Path Units.
Dec 15 09:56:11 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 15 09:56:11 localhost systemd[1]: Reached target Slice Units.
Dec 15 09:56:11 localhost systemd[1]: Reached target Swaps.
Dec 15 09:56:11 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 15 09:56:11 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 15 09:56:11 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 15 09:56:11 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 15 09:56:11 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 15 09:56:11 localhost systemd[1]: Listening on udev Control Socket.
Dec 15 09:56:11 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 15 09:56:11 localhost systemd[1]: Mounting Huge Pages File System...
Dec 15 09:56:11 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 15 09:56:11 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 15 09:56:11 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 15 09:56:11 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 15 09:56:11 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 15 09:56:11 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 15 09:56:11 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 15 09:56:11 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Dec 15 09:56:11 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 15 09:56:11 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 15 09:56:11 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 15 09:56:11 localhost systemd[1]: Stopped Journal Service.
Dec 15 09:56:11 localhost kernel: fuse: init (API version 7.37)
Dec 15 09:56:11 localhost systemd[1]: Starting Journal Service...
Dec 15 09:56:11 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 15 09:56:11 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 15 09:56:11 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 15 09:56:11 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 15 09:56:11 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 15 09:56:11 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 15 09:56:11 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 15 09:56:11 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 15 09:56:11 localhost systemd-journald[680]: Journal started
Dec 15 09:56:11 localhost systemd-journald[680]: Runtime Journal (/run/log/journal/64f1d6692049d8be5e8b216cc203502c) is 8.0M, max 153.6M, 145.6M free.
Dec 15 09:56:11 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 15 09:56:11 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Started Journal Service.
Dec 15 09:56:11 localhost systemd[1]: Mounted Huge Pages File System.
Dec 15 09:56:11 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 15 09:56:11 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 15 09:56:11 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 15 09:56:11 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 15 09:56:11 localhost kernel: ACPI: bus type drm_connector registered
Dec 15 09:56:11 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 15 09:56:11 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 15 09:56:11 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Dec 15 09:56:11 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 15 09:56:11 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 15 09:56:11 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 15 09:56:11 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 15 09:56:11 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 15 09:56:11 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 15 09:56:11 localhost systemd[1]: Mounting FUSE Control File System...
Dec 15 09:56:11 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 15 09:56:11 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 15 09:56:11 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 15 09:56:11 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 15 09:56:11 localhost systemd[1]: Starting Load/Save OS Random Seed...
Dec 15 09:56:11 localhost systemd[1]: Starting Create System Users...
Dec 15 09:56:11 localhost systemd[1]: Mounted FUSE Control File System.
Dec 15 09:56:11 localhost systemd-journald[680]: Runtime Journal (/run/log/journal/64f1d6692049d8be5e8b216cc203502c) is 8.0M, max 153.6M, 145.6M free.
Dec 15 09:56:11 localhost systemd-journald[680]: Received client request to flush runtime journal.
Dec 15 09:56:11 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 15 09:56:11 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 15 09:56:11 localhost systemd[1]: Finished Create System Users.
Dec 15 09:56:12 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 15 09:56:12 localhost systemd[1]: Finished Load/Save OS Random Seed.
Dec 15 09:56:12 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 15 09:56:12 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 15 09:56:12 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 15 09:56:12 localhost systemd[1]: Reached target Local File Systems.
Dec 15 09:56:12 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 15 09:56:12 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 15 09:56:12 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 15 09:56:12 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec 15 09:56:12 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 15 09:56:12 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 15 09:56:12 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 15 09:56:12 localhost bootctl[697]: Couldn't find EFI system partition, skipping.
Dec 15 09:56:12 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 15 09:56:12 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 15 09:56:12 localhost systemd[1]: Starting Security Auditing Service...
Dec 15 09:56:12 localhost systemd[1]: Starting RPC Bind...
Dec 15 09:56:12 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 15 09:56:12 localhost auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec 15 09:56:12 localhost auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec 15 09:56:12 localhost systemd[1]: Started RPC Bind.
Dec 15 09:56:12 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 15 09:56:12 localhost augenrules[708]: /sbin/augenrules: No change
Dec 15 09:56:12 localhost augenrules[723]: No rules
Dec 15 09:56:12 localhost augenrules[723]: enabled 1
Dec 15 09:56:12 localhost augenrules[723]: failure 1
Dec 15 09:56:12 localhost augenrules[723]: pid 703
Dec 15 09:56:12 localhost augenrules[723]: rate_limit 0
Dec 15 09:56:12 localhost augenrules[723]: backlog_limit 8192
Dec 15 09:56:12 localhost augenrules[723]: lost 0
Dec 15 09:56:12 localhost augenrules[723]: backlog 4
Dec 15 09:56:12 localhost augenrules[723]: backlog_wait_time 60000
Dec 15 09:56:12 localhost augenrules[723]: backlog_wait_time_actual 0
Dec 15 09:56:12 localhost augenrules[723]: enabled 1
Dec 15 09:56:12 localhost augenrules[723]: failure 1
Dec 15 09:56:12 localhost augenrules[723]: pid 703
Dec 15 09:56:12 localhost augenrules[723]: rate_limit 0
Dec 15 09:56:12 localhost augenrules[723]: backlog_limit 8192
Dec 15 09:56:12 localhost augenrules[723]: lost 0
Dec 15 09:56:12 localhost augenrules[723]: backlog 0
Dec 15 09:56:12 localhost augenrules[723]: backlog_wait_time 60000
Dec 15 09:56:12 localhost augenrules[723]: backlog_wait_time_actual 0
Dec 15 09:56:12 localhost augenrules[723]: enabled 1
Dec 15 09:56:12 localhost augenrules[723]: failure 1
Dec 15 09:56:12 localhost augenrules[723]: pid 703
Dec 15 09:56:12 localhost augenrules[723]: rate_limit 0
Dec 15 09:56:12 localhost augenrules[723]: backlog_limit 8192
Dec 15 09:56:12 localhost augenrules[723]: lost 0
Dec 15 09:56:12 localhost augenrules[723]: backlog 2
Dec 15 09:56:12 localhost augenrules[723]: backlog_wait_time 60000
Dec 15 09:56:12 localhost augenrules[723]: backlog_wait_time_actual 0
Dec 15 09:56:12 localhost systemd[1]: Started Security Auditing Service.
Dec 15 09:56:12 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 15 09:56:12 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 15 09:56:12 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 15 09:56:13 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 15 09:56:13 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 15 09:56:13 localhost systemd[1]: Starting Update is Completed...
Dec 15 09:56:13 localhost systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Dec 15 09:56:13 localhost systemd[1]: Finished Update is Completed.
Dec 15 09:56:13 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 15 09:56:13 localhost systemd[1]: Reached target System Initialization.
Dec 15 09:56:13 localhost systemd[1]: Started dnf makecache --timer.
Dec 15 09:56:13 localhost systemd[1]: Started Daily rotation of log files.
Dec 15 09:56:13 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 15 09:56:13 localhost systemd[1]: Reached target Timer Units.
Dec 15 09:56:13 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 15 09:56:13 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 15 09:56:13 localhost systemd[1]: Reached target Socket Units.
Dec 15 09:56:13 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 15 09:56:13 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 15 09:56:13 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 15 09:56:13 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 15 09:56:13 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 15 09:56:13 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 15 09:56:13 localhost systemd-udevd[740]: Network interface NamePolicy= disabled on kernel command line.
Dec 15 09:56:13 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 15 09:56:13 localhost systemd[1]: Reached target Basic System.
Dec 15 09:56:13 localhost dbus-broker-lau[752]: Ready
Dec 15 09:56:13 localhost systemd[1]: Starting NTP client/server...
Dec 15 09:56:13 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 15 09:56:13 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec 15 09:56:13 localhost chronyd[782]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 15 09:56:13 localhost chronyd[782]: Loaded 0 symmetric keys
Dec 15 09:56:13 localhost chronyd[782]: Using right/UTC timezone to obtain leap second data
Dec 15 09:56:13 localhost chronyd[782]: Loaded seccomp filter (level 2)
Dec 15 09:56:13 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 15 09:56:13 localhost systemd[1]: Starting IPv4 firewall with iptables...
Dec 15 09:56:13 localhost systemd[1]: Started irqbalance daemon.
Dec 15 09:56:13 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 15 09:56:13 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 15 09:56:13 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 15 09:56:13 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 15 09:56:13 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 15 09:56:13 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 15 09:56:13 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 15 09:56:13 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 15 09:56:13 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 15 09:56:13 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 15 09:56:13 localhost systemd[1]: Starting User Login Management...
Dec 15 09:56:13 localhost systemd[1]: Started NTP client/server.
Dec 15 09:56:13 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 15 09:56:13 localhost kernel: kvm_amd: TSC scaling supported
Dec 15 09:56:13 localhost kernel: kvm_amd: Nested Virtualization enabled
Dec 15 09:56:13 localhost kernel: kvm_amd: Nested Paging enabled
Dec 15 09:56:13 localhost kernel: kvm_amd: LBR virtualization supported
Dec 15 09:56:13 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 15 09:56:13 localhost systemd-logind[797]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 15 09:56:13 localhost systemd-logind[797]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 15 09:56:13 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec 15 09:56:13 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 15 09:56:13 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 15 09:56:13 localhost systemd-logind[797]: New seat seat0.
Dec 15 09:56:13 localhost kernel: Console: switching to colour dummy device 80x25
Dec 15 09:56:13 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 15 09:56:13 localhost kernel: [drm] features: -context_init
Dec 15 09:56:13 localhost systemd[1]: Started User Login Management.
Dec 15 09:56:13 localhost kernel: [drm] number of scanouts: 1
Dec 15 09:56:13 localhost kernel: [drm] number of cap sets: 0
Dec 15 09:56:13 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec 15 09:56:13 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 15 09:56:13 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 15 09:56:13 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 15 09:56:13 localhost iptables.init[786]: iptables: Applying firewall rules: [  OK  ]
Dec 15 09:56:13 localhost systemd[1]: Finished IPv4 firewall with iptables.
Dec 15 09:56:13 localhost cloud-init[840]: Cloud-init v. 24.4-7.el9 running 'init-local' at Mon, 15 Dec 2025 09:56:13 +0000. Up 6.54 seconds.
Dec 15 09:56:13 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Dec 15 09:56:13 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Dec 15 09:56:13 localhost systemd[1]: run-cloud\x2dinit-tmp-tmp0mckq52t.mount: Deactivated successfully.
Dec 15 09:56:14 localhost systemd[1]: Starting Hostname Service...
Dec 15 09:56:14 localhost systemd[1]: Started Hostname Service.
Dec 15 09:56:14 np0005559875.novalocal systemd-hostnamed[854]: Hostname set to <np0005559875.novalocal> (static)
Dec 15 09:56:14 np0005559875.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec 15 09:56:14 np0005559875.novalocal systemd[1]: Reached target Preparation for Network.
Dec 15 09:56:14 np0005559875.novalocal systemd[1]: Starting Network Manager...
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3380] NetworkManager (version 1.54.2-1.el9) is starting... (boot:f0a48d23-f548-4261-85f3-3468dc8c15f7)
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3384] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3469] manager[0x559ffcf2b000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3520] hostname: hostname: using hostnamed
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3521] hostname: static hostname changed from (none) to "np0005559875.novalocal"
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3526] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3703] manager[0x559ffcf2b000]: rfkill: Wi-Fi hardware radio set enabled
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3705] manager[0x559ffcf2b000]: rfkill: WWAN hardware radio set enabled
Dec 15 09:56:14 np0005559875.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3761] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3763] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3764] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3765] manager: Networking is enabled by state file
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3768] settings: Loaded settings plugin: keyfile (internal)
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3783] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3804] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3820] dhcp: init: Using DHCP client 'internal'
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3825] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3838] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3847] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3854] device (lo): Activation: starting connection 'lo' (e64a39bd-9875-4e86-a1ed-975879eaa15a)
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3863] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3868] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3892] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3896] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3900] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3903] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3906] device (eth0): carrier: link connected
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3909] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3915] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3922] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3926] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3928] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3930] manager: NetworkManager state is now CONNECTING
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3932] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3940] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.3943] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 15 09:56:14 np0005559875.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 15 09:56:14 np0005559875.novalocal systemd[1]: Started Network Manager.
Dec 15 09:56:14 np0005559875.novalocal systemd[1]: Reached target Network.
Dec 15 09:56:14 np0005559875.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 15 09:56:14 np0005559875.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 15 09:56:14 np0005559875.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.4212] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.4215] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 15 09:56:14 np0005559875.novalocal NetworkManager[858]: <info>  [1765792574.4223] device (lo): Activation: successful, device activated.
Dec 15 09:56:14 np0005559875.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Dec 15 09:56:14 np0005559875.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 15 09:56:14 np0005559875.novalocal systemd[1]: Reached target NFS client services.
Dec 15 09:56:14 np0005559875.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Dec 15 09:56:14 np0005559875.novalocal systemd[1]: Reached target Remote File Systems.
Dec 15 09:56:14 np0005559875.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 15 09:56:16 np0005559875.novalocal NetworkManager[858]: <info>  [1765792576.0015] dhcp4 (eth0): state changed new lease, address=38.102.83.5
Dec 15 09:56:16 np0005559875.novalocal NetworkManager[858]: <info>  [1765792576.0026] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 15 09:56:16 np0005559875.novalocal NetworkManager[858]: <info>  [1765792576.0052] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 15 09:56:16 np0005559875.novalocal NetworkManager[858]: <info>  [1765792576.0076] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 15 09:56:16 np0005559875.novalocal NetworkManager[858]: <info>  [1765792576.0077] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 15 09:56:16 np0005559875.novalocal NetworkManager[858]: <info>  [1765792576.0080] manager: NetworkManager state is now CONNECTED_SITE
Dec 15 09:56:16 np0005559875.novalocal NetworkManager[858]: <info>  [1765792576.0082] device (eth0): Activation: successful, device activated.
Dec 15 09:56:16 np0005559875.novalocal NetworkManager[858]: <info>  [1765792576.0085] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 15 09:56:16 np0005559875.novalocal NetworkManager[858]: <info>  [1765792576.0087] manager: startup complete
Dec 15 09:56:16 np0005559875.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 15 09:56:16 np0005559875.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: Cloud-init v. 24.4-7.el9 running 'init' at Mon, 15 Dec 2025 09:56:16 +0000. Up 9.09 seconds.
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: |  eth0  | True |         38.102.83.5          | 255.255.255.0 | global | fa:16:3e:01:f3:33 |
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:fe01:f333/64 |       .       |  link  | fa:16:3e:01:f3:33 |
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Dec 15 09:56:16 np0005559875.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 15 09:56:20 np0005559875.novalocal useradd[990]: new group: name=cloud-user, GID=1001
Dec 15 09:56:20 np0005559875.novalocal useradd[990]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Dec 15 09:56:20 np0005559875.novalocal useradd[990]: add 'cloud-user' to group 'adm'
Dec 15 09:56:20 np0005559875.novalocal useradd[990]: add 'cloud-user' to group 'systemd-journal'
Dec 15 09:56:20 np0005559875.novalocal useradd[990]: add 'cloud-user' to shadow group 'adm'
Dec 15 09:56:20 np0005559875.novalocal useradd[990]: add 'cloud-user' to shadow group 'systemd-journal'
Dec 15 09:56:20 np0005559875.novalocal chronyd[782]: Selected source 206.108.0.131 (2.centos.pool.ntp.org)
Dec 15 09:56:20 np0005559875.novalocal chronyd[782]: System clock TAI offset set to 37 seconds
Dec 15 09:56:22 np0005559875.novalocal chronyd[782]: Selected source 147.189.136.126 (2.centos.pool.ntp.org)
Dec 15 09:56:23 np0005559875.novalocal irqbalance[793]: Cannot change IRQ 25 affinity: Operation not permitted
Dec 15 09:56:23 np0005559875.novalocal irqbalance[793]: IRQ 25 affinity is now unmanaged
Dec 15 09:56:23 np0005559875.novalocal irqbalance[793]: Cannot change IRQ 31 affinity: Operation not permitted
Dec 15 09:56:23 np0005559875.novalocal irqbalance[793]: IRQ 31 affinity is now unmanaged
Dec 15 09:56:23 np0005559875.novalocal irqbalance[793]: Cannot change IRQ 28 affinity: Operation not permitted
Dec 15 09:56:23 np0005559875.novalocal irqbalance[793]: IRQ 28 affinity is now unmanaged
Dec 15 09:56:23 np0005559875.novalocal irqbalance[793]: Cannot change IRQ 32 affinity: Operation not permitted
Dec 15 09:56:23 np0005559875.novalocal irqbalance[793]: IRQ 32 affinity is now unmanaged
Dec 15 09:56:23 np0005559875.novalocal irqbalance[793]: Cannot change IRQ 30 affinity: Operation not permitted
Dec 15 09:56:23 np0005559875.novalocal irqbalance[793]: IRQ 30 affinity is now unmanaged
Dec 15 09:56:23 np0005559875.novalocal irqbalance[793]: Cannot change IRQ 29 affinity: Operation not permitted
Dec 15 09:56:23 np0005559875.novalocal irqbalance[793]: IRQ 29 affinity is now unmanaged
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: Generating public/private rsa key pair.
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: The key fingerprint is:
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: SHA256:3RRzYytuLq0XXPCB2YXlEWqMXtNTGx9W4NLDWtLfFss root@np0005559875.novalocal
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: The key's randomart image is:
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: +---[RSA 3072]----+
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |            o+=@B|
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |            *XB+B|
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |           .=OO*o|
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |         ..+o*=o=|
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |        S .o=. E+|
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |           +o  . |
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |          . o.   |
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |           o.    |
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |          ..     |
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: +----[SHA256]-----+
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: Generating public/private ecdsa key pair.
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: The key fingerprint is:
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: SHA256:xCUsJjn0v5+vrw9ih2nz8SGt9KnfZPYlPLjlllafVcc root@np0005559875.novalocal
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: The key's randomart image is:
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: +---[ECDSA 256]---+
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |   ... .. .      |
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |    +.o..o       |
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |     +..o      . |
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |       o        E|
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |        S       o|
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |         + . o  o|
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |        O * + Oo=|
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |       o B O @+=o|
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |          OOXoo .|
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: +----[SHA256]-----+
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: Generating public/private ed25519 key pair.
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: The key fingerprint is:
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: SHA256:clCo5bDrgkPouVvrN/z7S37y+OEddnM8LiK1/Kk3kHQ root@np0005559875.novalocal
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: The key's randomart image is:
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: +--[ED25519 256]--+
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |       ..        |
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |    . o.         |
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |     *.          |
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |    o ..   . E   |
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |.    .. S . o    |
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |..  .  o   +   . |
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |o..o.    .o.oo +o|
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: |ooo o+  ooo=++= +|
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: | +++. ooo=B==+o. |
Dec 15 09:56:24 np0005559875.novalocal cloud-init[922]: +----[SHA256]-----+
Dec 15 09:56:24 np0005559875.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Dec 15 09:56:24 np0005559875.novalocal systemd[1]: Reached target Cloud-config availability.
Dec 15 09:56:24 np0005559875.novalocal systemd[1]: Reached target Network is Online.
Dec 15 09:56:24 np0005559875.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Dec 15 09:56:24 np0005559875.novalocal systemd[1]: Starting Crash recovery kernel arming...
Dec 15 09:56:24 np0005559875.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Dec 15 09:56:24 np0005559875.novalocal systemd[1]: Starting System Logging Service...
Dec 15 09:56:24 np0005559875.novalocal systemd[1]: Starting OpenSSH server daemon...
Dec 15 09:56:24 np0005559875.novalocal sm-notify[1006]: Version 2.5.4 starting
Dec 15 09:56:24 np0005559875.novalocal systemd[1]: Starting Permit User Sessions...
Dec 15 09:56:24 np0005559875.novalocal systemd[1]: Started Notify NFS peers of a restart.
Dec 15 09:56:24 np0005559875.novalocal sshd[1008]: Server listening on 0.0.0.0 port 22.
Dec 15 09:56:24 np0005559875.novalocal sshd[1008]: Server listening on :: port 22.
Dec 15 09:56:24 np0005559875.novalocal systemd[1]: Started OpenSSH server daemon.
Dec 15 09:56:24 np0005559875.novalocal systemd[1]: Finished Permit User Sessions.
Dec 15 09:56:24 np0005559875.novalocal systemd[1]: Started Command Scheduler.
Dec 15 09:56:24 np0005559875.novalocal systemd[1]: Started Getty on tty1.
Dec 15 09:56:24 np0005559875.novalocal crond[1013]: (CRON) STARTUP (1.5.7)
Dec 15 09:56:24 np0005559875.novalocal crond[1013]: (CRON) INFO (Syslog will be used instead of sendmail.)
Dec 15 09:56:24 np0005559875.novalocal crond[1013]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 93% if used.)
Dec 15 09:56:24 np0005559875.novalocal crond[1013]: (CRON) INFO (running with inotify support)
Dec 15 09:56:24 np0005559875.novalocal systemd[1]: Started Serial Getty on ttyS0.
Dec 15 09:56:24 np0005559875.novalocal systemd[1]: Reached target Login Prompts.
Dec 15 09:56:24 np0005559875.novalocal rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Dec 15 09:56:24 np0005559875.novalocal systemd[1]: Started System Logging Service.
Dec 15 09:56:24 np0005559875.novalocal rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec 15 09:56:24 np0005559875.novalocal systemd[1]: Reached target Multi-User System.
Dec 15 09:56:24 np0005559875.novalocal sshd-session[1020]: Unable to negotiate with 38.102.83.114 port 45296: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Dec 15 09:56:24 np0005559875.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 15 09:56:24 np0005559875.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 15 09:56:24 np0005559875.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 15 09:56:24 np0005559875.novalocal rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 15 09:56:24 np0005559875.novalocal sshd-session[1039]: Unable to negotiate with 38.102.83.114 port 45328: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Dec 15 09:56:24 np0005559875.novalocal sshd-session[1049]: Unable to negotiate with 38.102.83.114 port 45344: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Dec 15 09:56:24 np0005559875.novalocal sshd-session[1059]: Connection reset by 38.102.83.114 port 45356 [preauth]
Dec 15 09:56:24 np0005559875.novalocal sshd-session[1012]: Connection closed by 38.102.83.114 port 45282 [preauth]
Dec 15 09:56:24 np0005559875.novalocal sshd-session[1079]: Unable to negotiate with 38.102.83.114 port 45372: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Dec 15 09:56:24 np0005559875.novalocal sshd-session[1032]: Connection closed by 38.102.83.114 port 45312 [preauth]
Dec 15 09:56:24 np0005559875.novalocal sshd-session[1084]: Unable to negotiate with 38.102.83.114 port 45380: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Dec 15 09:56:24 np0005559875.novalocal sshd-session[1072]: Connection closed by 38.102.83.114 port 45366 [preauth]
Dec 15 09:56:24 np0005559875.novalocal cloud-init[1088]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Mon, 15 Dec 2025 09:56:24 +0000. Up 17.50 seconds.
Dec 15 09:56:24 np0005559875.novalocal kdumpctl[1019]: kdump: No kdump initial ramdisk found.
Dec 15 09:56:24 np0005559875.novalocal kdumpctl[1019]: kdump: Rebuilding /boot/initramfs-5.14.0-648.el9.x86_64kdump.img
Dec 15 09:56:24 np0005559875.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Dec 15 09:56:24 np0005559875.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Dec 15 09:56:25 np0005559875.novalocal cloud-init[1214]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Mon, 15 Dec 2025 09:56:25 +0000. Up 18.00 seconds.
Dec 15 09:56:25 np0005559875.novalocal cloud-init[1247]: #############################################################
Dec 15 09:56:25 np0005559875.novalocal cloud-init[1250]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 15 09:56:25 np0005559875.novalocal cloud-init[1254]: 256 SHA256:xCUsJjn0v5+vrw9ih2nz8SGt9KnfZPYlPLjlllafVcc root@np0005559875.novalocal (ECDSA)
Dec 15 09:56:25 np0005559875.novalocal cloud-init[1260]: 256 SHA256:clCo5bDrgkPouVvrN/z7S37y+OEddnM8LiK1/Kk3kHQ root@np0005559875.novalocal (ED25519)
Dec 15 09:56:25 np0005559875.novalocal cloud-init[1267]: 3072 SHA256:3RRzYytuLq0XXPCB2YXlEWqMXtNTGx9W4NLDWtLfFss root@np0005559875.novalocal (RSA)
Dec 15 09:56:25 np0005559875.novalocal cloud-init[1270]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 15 09:56:25 np0005559875.novalocal cloud-init[1272]: #############################################################
Dec 15 09:56:25 np0005559875.novalocal cloud-init[1214]: Cloud-init v. 24.4-7.el9 finished at Mon, 15 Dec 2025 09:56:25 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 18.21 seconds
Dec 15 09:56:25 np0005559875.novalocal dracut[1303]: dracut-057-102.git20250818.el9
Dec 15 09:56:25 np0005559875.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Dec 15 09:56:25 np0005559875.novalocal systemd[1]: Reached target Cloud-init target.
Dec 15 09:56:25 np0005559875.novalocal dracut[1305]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-648.el9.x86_64kdump.img 5.14.0-648.el9.x86_64
Dec 15 09:56:26 np0005559875.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: Module 'resume' will not be installed, because it's in the list to be omitted!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: memstrack is not available
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 15 09:56:26 np0005559875.novalocal dracut[1305]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 15 09:56:27 np0005559875.novalocal dracut[1305]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 15 09:56:27 np0005559875.novalocal dracut[1305]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 15 09:56:27 np0005559875.novalocal dracut[1305]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 15 09:56:27 np0005559875.novalocal dracut[1305]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 15 09:56:27 np0005559875.novalocal dracut[1305]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 15 09:56:27 np0005559875.novalocal dracut[1305]: memstrack is not available
Dec 15 09:56:27 np0005559875.novalocal dracut[1305]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
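[editor's note] The scan above is dracut probing for optional module binaries and skipping every module whose commands are absent; only the modules actually present get pulled into the kdump initramfs. A minimal sketch (assuming a stock RHEL 9 dracut; module names are illustrative) for inspecting and pruning that set by hand:

    # list every dracut module known to this installation
    dracut --list-modules
    # rebuild the running kernel's initramfs while explicitly omitting example modules
    dracut -f --omit "iscsi nvmf" /boot/initramfs-$(uname -r).img $(uname -r)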
Dec 15 09:56:27 np0005559875.novalocal dracut[1305]: *** Including module: systemd ***
Dec 15 09:56:27 np0005559875.novalocal dracut[1305]: *** Including module: fips ***
Dec 15 09:56:27 np0005559875.novalocal dracut[1305]: *** Including module: systemd-initrd ***
Dec 15 09:56:27 np0005559875.novalocal dracut[1305]: *** Including module: i18n ***
Dec 15 09:56:28 np0005559875.novalocal dracut[1305]: *** Including module: drm ***
Dec 15 09:56:28 np0005559875.novalocal dracut[1305]: *** Including module: prefixdevname ***
Dec 15 09:56:28 np0005559875.novalocal dracut[1305]: *** Including module: kernel-modules ***
Dec 15 09:56:28 np0005559875.novalocal kernel: block vda: the capability attribute has been deprecated.
Dec 15 09:56:29 np0005559875.novalocal dracut[1305]: *** Including module: kernel-modules-extra ***
Dec 15 09:56:29 np0005559875.novalocal dracut[1305]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Dec 15 09:56:29 np0005559875.novalocal dracut[1305]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Dec 15 09:56:29 np0005559875.novalocal dracut[1305]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Dec 15 09:56:29 np0005559875.novalocal dracut[1305]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Dec 15 09:56:29 np0005559875.novalocal dracut[1305]: *** Including module: qemu ***
Dec 15 09:56:29 np0005559875.novalocal dracut[1305]: *** Including module: fstab-sys ***
Dec 15 09:56:29 np0005559875.novalocal dracut[1305]: *** Including module: rootfs-block ***
Dec 15 09:56:29 np0005559875.novalocal dracut[1305]: *** Including module: terminfo ***
Dec 15 09:56:29 np0005559875.novalocal dracut[1305]: *** Including module: udev-rules ***
Dec 15 09:56:29 np0005559875.novalocal dracut[1305]: Skipping udev rule: 91-permissions.rules
Dec 15 09:56:29 np0005559875.novalocal dracut[1305]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 15 09:56:29 np0005559875.novalocal dracut[1305]: *** Including module: virtiofs ***
Dec 15 09:56:29 np0005559875.novalocal dracut[1305]: *** Including module: dracut-systemd ***
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]: *** Including module: usrmount ***
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]: *** Including module: base ***
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]: *** Including module: fs-lib ***
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]: *** Including module: kdumpbase ***
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]:   microcode_ctl module: mangling fw_dir
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]:     microcode_ctl: configuration "intel" is ignored
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 15 09:56:30 np0005559875.novalocal dracut[1305]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 15 09:56:31 np0005559875.novalocal dracut[1305]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 15 09:56:31 np0005559875.novalocal dracut[1305]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec 15 09:56:31 np0005559875.novalocal dracut[1305]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec 15 09:56:31 np0005559875.novalocal dracut[1305]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec 15 09:56:31 np0005559875.novalocal dracut[1305]: *** Including module: openssl ***
Dec 15 09:56:31 np0005559875.novalocal dracut[1305]: *** Including module: shutdown ***
Dec 15 09:56:31 np0005559875.novalocal dracut[1305]: *** Including module: squash ***
Dec 15 09:56:31 np0005559875.novalocal dracut[1305]: *** Including modules done ***
Dec 15 09:56:31 np0005559875.novalocal dracut[1305]: *** Installing kernel module dependencies ***
Dec 15 09:56:32 np0005559875.novalocal dracut[1305]: *** Installing kernel module dependencies done ***
Dec 15 09:56:32 np0005559875.novalocal dracut[1305]: *** Resolving executable dependencies ***
Dec 15 09:56:33 np0005559875.novalocal dracut[1305]: *** Resolving executable dependencies done ***
Dec 15 09:56:33 np0005559875.novalocal dracut[1305]: *** Generating early-microcode cpio image ***
Dec 15 09:56:33 np0005559875.novalocal dracut[1305]: *** Store current command line parameters ***
Dec 15 09:56:33 np0005559875.novalocal dracut[1305]: Stored kernel commandline:
Dec 15 09:56:33 np0005559875.novalocal dracut[1305]: No dracut internal kernel commandline stored in the initramfs
Dec 15 09:56:33 np0005559875.novalocal dracut[1305]: *** Install squash loader ***
Dec 15 09:56:34 np0005559875.novalocal dracut[1305]: *** Squashing the files inside the initramfs ***
Dec 15 09:56:35 np0005559875.novalocal dracut[1305]: *** Squashing the files inside the initramfs done ***
Dec 15 09:56:35 np0005559875.novalocal dracut[1305]: *** Creating image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' ***
Dec 15 09:56:35 np0005559875.novalocal dracut[1305]: *** Hardlinking files ***
Dec 15 09:56:35 np0005559875.novalocal dracut[1305]: Mode:           real
Dec 15 09:56:35 np0005559875.novalocal dracut[1305]: Files:          50
Dec 15 09:56:35 np0005559875.novalocal dracut[1305]: Linked:         0 files
Dec 15 09:56:35 np0005559875.novalocal dracut[1305]: Compared:       0 xattrs
Dec 15 09:56:35 np0005559875.novalocal dracut[1305]: Compared:       0 files
Dec 15 09:56:35 np0005559875.novalocal dracut[1305]: Saved:          0 B
Dec 15 09:56:35 np0005559875.novalocal dracut[1305]: Duration:       0.000624 seconds
Dec 15 09:56:35 np0005559875.novalocal dracut[1305]: *** Hardlinking files done ***
Dec 15 09:56:35 np0005559875.novalocal dracut[1305]: *** Creating initramfs image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' done ***
Dec 15 09:56:36 np0005559875.novalocal kdumpctl[1019]: kdump: kexec: loaded kdump kernel
Dec 15 09:56:36 np0005559875.novalocal kdumpctl[1019]: kdump: Starting kdump: [OK]
Dec 15 09:56:36 np0005559875.novalocal systemd[1]: Finished Crash recovery kernel arming.
Dec 15 09:56:36 np0005559875.novalocal systemd[1]: Startup finished in 1.631s (kernel) + 2.358s (initrd) + 25.439s (userspace) = 29.429s.
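[editor's note] At this point kdumpctl has loaded the freshly built /boot/initramfs-5.14.0-648.el9.x86_64kdump.img into the crash-kernel slot. A hedged way to verify the same state interactively (commands assume the stock RHEL 9 kexec-tools packaging):

    kdumpctl status                  # reports whether the crash kernel is loaded
    systemctl is-active kdump.service
    kdumpctl rebuild                 # regenerate the kdump initramfs after editing /etc/kdump.conf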
Dec 15 09:56:44 np0005559875.novalocal sshd-session[4298]: Accepted publickey for zuul from 38.102.83.114 port 59346 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Dec 15 09:56:44 np0005559875.novalocal systemd[1]: Created slice User Slice of UID 1000.
Dec 15 09:56:44 np0005559875.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 15 09:56:44 np0005559875.novalocal systemd-logind[797]: New session 1 of user zuul.
Dec 15 09:56:44 np0005559875.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 15 09:56:44 np0005559875.novalocal systemd[1]: Starting User Manager for UID 1000...
Dec 15 09:56:44 np0005559875.novalocal systemd[4302]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 15 09:56:44 np0005559875.novalocal systemd[4302]: Queued start job for default target Main User Target.
Dec 15 09:56:44 np0005559875.novalocal systemd[4302]: Created slice User Application Slice.
Dec 15 09:56:44 np0005559875.novalocal systemd[4302]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 15 09:56:44 np0005559875.novalocal systemd[4302]: Started Daily Cleanup of User's Temporary Directories.
Dec 15 09:56:44 np0005559875.novalocal systemd[4302]: Reached target Paths.
Dec 15 09:56:44 np0005559875.novalocal systemd[4302]: Reached target Timers.
Dec 15 09:56:44 np0005559875.novalocal systemd[4302]: Starting D-Bus User Message Bus Socket...
Dec 15 09:56:44 np0005559875.novalocal systemd[4302]: Starting Create User's Volatile Files and Directories...
Dec 15 09:56:44 np0005559875.novalocal systemd[4302]: Finished Create User's Volatile Files and Directories.
Dec 15 09:56:44 np0005559875.novalocal systemd[4302]: Listening on D-Bus User Message Bus Socket.
Dec 15 09:56:44 np0005559875.novalocal systemd[4302]: Reached target Sockets.
Dec 15 09:56:44 np0005559875.novalocal systemd[4302]: Reached target Basic System.
Dec 15 09:56:44 np0005559875.novalocal systemd[4302]: Reached target Main User Target.
Dec 15 09:56:44 np0005559875.novalocal systemd[4302]: Startup finished in 105ms.
Dec 15 09:56:44 np0005559875.novalocal systemd[1]: Started User Manager for UID 1000.
Dec 15 09:56:44 np0005559875.novalocal systemd[1]: Started Session 1 of User zuul.
Dec 15 09:56:44 np0005559875.novalocal sshd-session[4298]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 15 09:56:44 np0005559875.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 15 09:56:44 np0005559875.novalocal python3[4386]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 15 09:56:47 np0005559875.novalocal python3[4414]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 15 09:56:56 np0005559875.novalocal python3[4472]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 15 09:56:57 np0005559875.novalocal python3[4512]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 15 09:56:59 np0005559875.novalocal python3[4538]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtJe8TKamB5mf0O6Vck9PKEdSPVucpj3qou4IdO1mxTm0S+wWkFZpGhZwb69L2YPK+LvHadDDqaP5Nu0t3c20s1YUCSYIac2+iuO8e/QIZWEqGmWcU4YA2HXBdHfF01Rs+d4w5m54SzAdXBu5BdGmPoUFZfpo/dfL83ySWW7c8tWFgFa8pfKiGM0NqE7RuNToM7DQYBQR6PFuLLfeZSz03/Y63sluqU4/km4Ch+zn6i/6eN8sQ4M2CWtcfHoIO+JPWHH6p3B705ZYszjsQd4qIZA5Z5v7uxachFbwl02nmG4kUDd0YitMyOXlhbFkhHS8OuO42oBxmvkHrslmInSm1+HmbZHvTky1Q0hRzUSdfRSw1FK/aAB1QxphvPUzm2QjubNifiQ8tJ80M4ROY99IKuLqij8eyKKzDK4Zbs9FjFUrOYkkDJWEB/NXXElUheWwo8+PvvedQD9PgLugyAzkr8QF5rerCz9nm96BPmqmKlkul9qRvS0GuuR4/DOIrLTU= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:56:59 np0005559875.novalocal python3[4562]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 09:57:00 np0005559875.novalocal python3[4661]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 09:57:00 np0005559875.novalocal python3[4732]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765792620.292479-251-124475748615023/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=3ad2404564fc42cab49c58d6a6cb1b26_id_rsa follow=False checksum=995dbf9d21c9cbaf1593e9edc2a620d11d0df62b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 09:57:01 np0005559875.novalocal python3[4855]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 09:57:01 np0005559875.novalocal python3[4926]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765792621.2878692-306-150149226093409/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=3ad2404564fc42cab49c58d6a6cb1b26_id_rsa.pub follow=False checksum=9c06e7f3af553ceac7421cfd4c79671f9f76f8b7 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 09:57:04 np0005559875.novalocal python3[4974]: ansible-ping Invoked with data=pong
Dec 15 09:57:05 np0005559875.novalocal python3[4998]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 15 09:57:08 np0005559875.novalocal python3[5056]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 15 09:57:09 np0005559875.novalocal python3[5088]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 09:57:10 np0005559875.novalocal python3[5112]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 09:57:10 np0005559875.novalocal python3[5136]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 09:57:10 np0005559875.novalocal python3[5160]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 09:57:11 np0005559875.novalocal python3[5184]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 09:57:11 np0005559875.novalocal python3[5208]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 09:57:13 np0005559875.novalocal sudo[5232]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcvbovgkbetlzhemvwhbgjdqnlvbqikf ; /usr/bin/python3'
Dec 15 09:57:13 np0005559875.novalocal sudo[5232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 09:57:13 np0005559875.novalocal python3[5234]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 09:57:13 np0005559875.novalocal sudo[5232]: pam_unix(sudo:session): session closed for user root
Dec 15 09:57:13 np0005559875.novalocal sudo[5310]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oanglgriooqbuinfqydpqovhzihbkdqx ; /usr/bin/python3'
Dec 15 09:57:13 np0005559875.novalocal sudo[5310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 09:57:13 np0005559875.novalocal python3[5312]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 09:57:13 np0005559875.novalocal sudo[5310]: pam_unix(sudo:session): session closed for user root
Dec 15 09:57:14 np0005559875.novalocal sudo[5383]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioaekuaqenbkweoauyxbbebnkdlpnbli ; /usr/bin/python3'
Dec 15 09:57:14 np0005559875.novalocal sudo[5383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 09:57:14 np0005559875.novalocal python3[5385]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765792633.3611898-31-238284727842974/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 09:57:14 np0005559875.novalocal sudo[5383]: pam_unix(sudo:session): session closed for user root
Dec 15 09:57:14 np0005559875.novalocal python3[5433]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:15 np0005559875.novalocal python3[5457]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:15 np0005559875.novalocal python3[5481]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:15 np0005559875.novalocal python3[5505]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:16 np0005559875.novalocal python3[5529]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:16 np0005559875.novalocal python3[5553]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:16 np0005559875.novalocal python3[5577]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:16 np0005559875.novalocal python3[5601]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:17 np0005559875.novalocal python3[5625]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:17 np0005559875.novalocal python3[5649]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:17 np0005559875.novalocal python3[5673]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:18 np0005559875.novalocal python3[5697]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:18 np0005559875.novalocal python3[5721]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:18 np0005559875.novalocal python3[5745]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:18 np0005559875.novalocal python3[5769]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:19 np0005559875.novalocal python3[5793]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:19 np0005559875.novalocal python3[5817]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:19 np0005559875.novalocal python3[5841]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:20 np0005559875.novalocal python3[5865]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:20 np0005559875.novalocal python3[5889]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:20 np0005559875.novalocal python3[5913]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:21 np0005559875.novalocal python3[5937]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:21 np0005559875.novalocal python3[5961]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:21 np0005559875.novalocal python3[5985]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:21 np0005559875.novalocal python3[6009]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 09:57:22 np0005559875.novalocal python3[6033]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
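[editor's note] Each ansible-authorized_key invocation above appends one public key to /home/zuul/.ssh/authorized_keys while enforcing ownership and permissions. A rough shell equivalent for a single key (key material and comment are placeholders, not keys from this log; mode 448 decimal is octal 0700):

    install -d -m 700 -o zuul -g zuul /home/zuul/.ssh
    echo 'ssh-ed25519 AAAA...placeholder example@host' >> /home/zuul/.ssh/authorized_keys
    chmod 600 /home/zuul/.ssh/authorized_keys
    chown zuul:zuul /home/zuul/.ssh/authorized_keys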
Dec 15 09:57:25 np0005559875.novalocal sudo[6057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osgcqjgvmgqpvwattxfpyrsuqczqxkzk ; /usr/bin/python3'
Dec 15 09:57:25 np0005559875.novalocal sudo[6057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 09:57:25 np0005559875.novalocal python3[6059]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 15 09:57:25 np0005559875.novalocal systemd[1]: Starting Time & Date Service...
Dec 15 09:57:25 np0005559875.novalocal systemd[1]: Started Time & Date Service.
Dec 15 09:57:25 np0005559875.novalocal systemd-timedated[6061]: Changed time zone to 'UTC' (UTC).
Dec 15 09:57:25 np0005559875.novalocal sudo[6057]: pam_unix(sudo:session): session closed for user root
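[editor's note] The community.general.timezone task delegates to systemd-timedated, which is why the Time & Date Service starts on demand and logs the zone change. The interactive equivalent (standard systemd tooling, not taken from this run):

    timedatectl set-timezone UTC
    timedatectl      # "Time zone: UTC (UTC, +0000)" confirms the change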
Dec 15 09:57:25 np0005559875.novalocal sudo[6088]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvlfmvpyigcqmfjgfzzivaujvdehsddu ; /usr/bin/python3'
Dec 15 09:57:25 np0005559875.novalocal sudo[6088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 09:57:25 np0005559875.novalocal python3[6090]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 09:57:25 np0005559875.novalocal sudo[6088]: pam_unix(sudo:session): session closed for user root
Dec 15 09:57:26 np0005559875.novalocal python3[6166]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 09:57:26 np0005559875.novalocal python3[6237]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1765792646.0823526-251-191425804341956/source _original_basename=tmpqegh7h1y follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 09:57:27 np0005559875.novalocal python3[6337]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 09:57:27 np0005559875.novalocal python3[6408]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765792646.9390368-301-37984388628686/source _original_basename=tmp1s5feyot follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 09:57:28 np0005559875.novalocal sudo[6508]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olbfxqzentloqxklzwcxviwmzumxeuke ; /usr/bin/python3'
Dec 15 09:57:28 np0005559875.novalocal sudo[6508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 09:57:28 np0005559875.novalocal python3[6510]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 09:57:28 np0005559875.novalocal sudo[6508]: pam_unix(sudo:session): session closed for user root
Dec 15 09:57:28 np0005559875.novalocal sudo[6581]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgxnenagddhssliciuxfmwktyccsklyi ; /usr/bin/python3'
Dec 15 09:57:28 np0005559875.novalocal sudo[6581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 09:57:28 np0005559875.novalocal python3[6583]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765792648.1918504-381-273349247455526/source _original_basename=tmpj_tqpw_x follow=False checksum=19d309ebea5b58181725fc1dc4cea95ea4d18865 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 09:57:28 np0005559875.novalocal sudo[6581]: pam_unix(sudo:session): session closed for user root
Dec 15 09:57:29 np0005559875.novalocal python3[6631]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 09:57:29 np0005559875.novalocal python3[6657]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 09:57:29 np0005559875.novalocal sudo[6735]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbppwggbwvuctmkhwntulwvahpzmbodt ; /usr/bin/python3'
Dec 15 09:57:29 np0005559875.novalocal sudo[6735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 09:57:30 np0005559875.novalocal python3[6737]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 09:57:30 np0005559875.novalocal sudo[6735]: pam_unix(sudo:session): session closed for user root
Dec 15 09:57:30 np0005559875.novalocal sudo[6808]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pacjukaulixrfqphmozkkyokdqbblruf ; /usr/bin/python3'
Dec 15 09:57:30 np0005559875.novalocal sudo[6808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 09:57:30 np0005559875.novalocal python3[6810]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1765792649.85647-451-177912806798998/source _original_basename=tmpx0m3qdzi follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 09:57:30 np0005559875.novalocal sudo[6808]: pam_unix(sudo:session): session closed for user root
Dec 15 09:57:31 np0005559875.novalocal sudo[6859]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vklhoegiqybejvbwfxzkypzkzkwpuaua ; /usr/bin/python3'
Dec 15 09:57:31 np0005559875.novalocal sudo[6859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 09:57:31 np0005559875.novalocal python3[6861]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-d4b5-3d8e-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 09:57:31 np0005559875.novalocal sudo[6859]: pam_unix(sudo:session): session closed for user root
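[editor's note] The sudoers drop-in is written with mode=288 (decimal for octal 0440) and then syntax-checked with visudo before the play continues. A sketch of the same sequence by hand (file name reused from the log, file contents assumed):

    install -m 0440 -o root -g root zuul-sudo-grep /etc/sudoers.d/zuul-sudo-grep
    visudo -cf /etc/sudoers.d/zuul-sudo-grep   # parse-check only this drop-in
    visudo -c                                  # parse-check the whole sudoers configuration, as the play does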
Dec 15 09:57:31 np0005559875.novalocal python3[6889]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163efc-24cc-d4b5-3d8e-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 15 09:57:33 np0005559875.novalocal python3[6918]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 09:57:55 np0005559875.novalocal sudo[6942]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dweysfqacumplgktqygeooefzdxephol ; /usr/bin/python3'
Dec 15 09:57:55 np0005559875.novalocal sudo[6942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 09:57:55 np0005559875.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 15 09:57:55 np0005559875.novalocal python3[6944]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 09:57:55 np0005559875.novalocal sudo[6942]: pam_unix(sudo:session): session closed for user root
Dec 15 09:58:36 np0005559875.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 15 09:58:36 np0005559875.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec 15 09:58:36 np0005559875.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec 15 09:58:36 np0005559875.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec 15 09:58:36 np0005559875.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec 15 09:58:36 np0005559875.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec 15 09:58:36 np0005559875.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec 15 09:58:36 np0005559875.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec 15 09:58:36 np0005559875.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec 15 09:58:36 np0005559875.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec 15 09:58:36 np0005559875.novalocal NetworkManager[858]: <info>  [1765792716.9544] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 15 09:58:36 np0005559875.novalocal systemd-udevd[6947]: Network interface NamePolicy= disabled on kernel command line.
Dec 15 09:58:36 np0005559875.novalocal NetworkManager[858]: <info>  [1765792716.9742] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 15 09:58:36 np0005559875.novalocal NetworkManager[858]: <info>  [1765792716.9776] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 15 09:58:36 np0005559875.novalocal NetworkManager[858]: <info>  [1765792716.9782] device (eth1): carrier: link connected
Dec 15 09:58:36 np0005559875.novalocal NetworkManager[858]: <info>  [1765792716.9786] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 15 09:58:36 np0005559875.novalocal NetworkManager[858]: <info>  [1765792716.9795] policy: auto-activating connection 'Wired connection 1' (2e657e63-4775-3f98-95ab-5b1da731b772)
Dec 15 09:58:36 np0005559875.novalocal NetworkManager[858]: <info>  [1765792716.9800] device (eth1): Activation: starting connection 'Wired connection 1' (2e657e63-4775-3f98-95ab-5b1da731b772)
Dec 15 09:58:36 np0005559875.novalocal NetworkManager[858]: <info>  [1765792716.9802] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 15 09:58:36 np0005559875.novalocal NetworkManager[858]: <info>  [1765792716.9809] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 15 09:58:36 np0005559875.novalocal NetworkManager[858]: <info>  [1765792716.9816] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 15 09:58:36 np0005559875.novalocal NetworkManager[858]: <info>  [1765792716.9823] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 15 09:58:37 np0005559875.novalocal python3[6974]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-5ebe-959b-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 09:58:47 np0005559875.novalocal sudo[7052]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofadyopkqfucmbuvpkkoxsmpcsewlcjr ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 15 09:58:47 np0005559875.novalocal sudo[7052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 09:58:47 np0005559875.novalocal python3[7054]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 09:58:47 np0005559875.novalocal sudo[7052]: pam_unix(sudo:session): session closed for user root
Dec 15 09:58:48 np0005559875.novalocal sudo[7125]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhgyibrsxxhuxcqbyophudfydoktokdv ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 15 09:58:48 np0005559875.novalocal sudo[7125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 09:58:48 np0005559875.novalocal python3[7127]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765792727.5533483-104-21971153412082/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=5fdf1252117e9ed7aa620580b5d4612d54a3f74f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 09:58:48 np0005559875.novalocal sudo[7125]: pam_unix(sudo:session): session closed for user root
Dec 15 09:58:48 np0005559875.novalocal sudo[7175]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smajxpxnsbnsajtdrtojbyvndstiijqg ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 15 09:58:48 np0005559875.novalocal sudo[7175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 09:58:49 np0005559875.novalocal python3[7177]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 15 09:58:49 np0005559875.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 15 09:58:49 np0005559875.novalocal systemd[1]: Stopped Network Manager Wait Online.
Dec 15 09:58:49 np0005559875.novalocal systemd[1]: Stopping Network Manager Wait Online...
Dec 15 09:58:49 np0005559875.novalocal systemd[1]: Stopping Network Manager...
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[858]: <info>  [1765792729.0894] caught SIGTERM, shutting down normally.
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[858]: <info>  [1765792729.0908] dhcp4 (eth0): canceled DHCP transaction
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[858]: <info>  [1765792729.0908] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[858]: <info>  [1765792729.0909] dhcp4 (eth0): state changed no lease
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[858]: <info>  [1765792729.0912] manager: NetworkManager state is now CONNECTING
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[858]: <info>  [1765792729.1012] dhcp4 (eth1): canceled DHCP transaction
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[858]: <info>  [1765792729.1014] dhcp4 (eth1): state changed no lease
Dec 15 09:58:49 np0005559875.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[858]: <info>  [1765792729.1087] exiting (success)
Dec 15 09:58:49 np0005559875.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 15 09:58:49 np0005559875.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 15 09:58:49 np0005559875.novalocal systemd[1]: Stopped Network Manager.
Dec 15 09:58:49 np0005559875.novalocal systemd[1]: NetworkManager.service: Consumed 1.122s CPU time, 9.9M memory peak.
Dec 15 09:58:49 np0005559875.novalocal systemd[1]: Starting Network Manager...
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.1537] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:f0a48d23-f548-4261-85f3-3468dc8c15f7)
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.1540] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.1591] manager[0x55e942c74000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 15 09:58:49 np0005559875.novalocal systemd[1]: Starting Hostname Service...
Dec 15 09:58:49 np0005559875.novalocal systemd[1]: Started Hostname Service.
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2308] hostname: hostname: using hostnamed
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2309] hostname: static hostname changed from (none) to "np0005559875.novalocal"
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2315] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2320] manager[0x55e942c74000]: rfkill: Wi-Fi hardware radio set enabled
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2321] manager[0x55e942c74000]: rfkill: WWAN hardware radio set enabled
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2348] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2348] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2349] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2349] manager: Networking is enabled by state file
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2351] settings: Loaded settings plugin: keyfile (internal)
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2355] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2377] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2385] dhcp: init: Using DHCP client 'internal'
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2387] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2392] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2396] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2402] device (lo): Activation: starting connection 'lo' (e64a39bd-9875-4e86-a1ed-975879eaa15a)
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2408] device (eth0): carrier: link connected
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2411] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2415] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2415] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2420] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2425] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2431] device (eth1): carrier: link connected
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2436] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2441] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (2e657e63-4775-3f98-95ab-5b1da731b772) (indicated)
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2441] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2448] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2453] device (eth1): Activation: starting connection 'Wired connection 1' (2e657e63-4775-3f98-95ab-5b1da731b772)
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2461] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 15 09:58:49 np0005559875.novalocal systemd[1]: Started Network Manager.
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2465] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2467] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2470] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2472] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2475] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2477] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2479] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2483] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2516] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2522] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2534] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2538] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2554] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2559] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.2564] device (lo): Activation: successful, device activated.
Dec 15 09:58:49 np0005559875.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 15 09:58:49 np0005559875.novalocal sudo[7175]: pam_unix(sudo:session): session closed for user root
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.5124] dhcp4 (eth0): state changed new lease, address=38.102.83.5
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.5135] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.5211] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.5257] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.5259] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.5262] manager: NetworkManager state is now CONNECTED_SITE
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.5264] device (eth0): Activation: successful, device activated.
Dec 15 09:58:49 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792729.5268] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 15 09:58:49 np0005559875.novalocal python3[7242]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-5ebe-959b-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 09:58:59 np0005559875.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 15 09:59:13 np0005559875.novalocal systemd[4302]: Starting Mark boot as successful...
Dec 15 09:59:13 np0005559875.novalocal systemd[4302]: Finished Mark boot as successful.
Dec 15 09:59:19 np0005559875.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 15 09:59:34 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792774.2411] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 15 09:59:34 np0005559875.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 15 09:59:34 np0005559875.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 15 09:59:34 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792774.2658] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 15 09:59:34 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792774.2659] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 15 09:59:34 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792774.2663] device (eth1): Activation: successful, device activated.
Dec 15 09:59:34 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792774.2668] manager: startup complete
Dec 15 09:59:34 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792774.2670] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec 15 09:59:34 np0005559875.novalocal NetworkManager[7187]: <warn>  [1765792774.2674] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec 15 09:59:34 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792774.2680] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec 15 09:59:34 np0005559875.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 15 09:59:34 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792774.2822] dhcp4 (eth1): canceled DHCP transaction
Dec 15 09:59:34 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792774.2823] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 15 09:59:34 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792774.2823] dhcp4 (eth1): state changed no lease
Dec 15 09:59:34 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792774.2836] policy: auto-activating connection 'ci-private-network' (b505dc75-3963-5da7-bfe2-a0606373c56e)
Dec 15 09:59:34 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792774.2840] device (eth1): Activation: starting connection 'ci-private-network' (b505dc75-3963-5da7-bfe2-a0606373c56e)
Dec 15 09:59:34 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792774.2841] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 15 09:59:34 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792774.2843] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 15 09:59:34 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792774.2848] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 15 09:59:34 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792774.2855] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 15 09:59:34 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792774.2892] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 15 09:59:34 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792774.2893] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 15 09:59:34 np0005559875.novalocal NetworkManager[7187]: <info>  [1765792774.2897] device (eth1): Activation: successful, device activated.
Dec 15 09:59:44 np0005559875.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 15 09:59:49 np0005559875.novalocal sshd-session[4311]: Received disconnect from 38.102.83.114 port 59346:11: disconnected by user
Dec 15 09:59:49 np0005559875.novalocal sshd-session[4311]: Disconnected from user zuul 38.102.83.114 port 59346
Dec 15 09:59:49 np0005559875.novalocal sshd-session[4298]: pam_unix(sshd:session): session closed for user zuul
Dec 15 09:59:49 np0005559875.novalocal systemd-logind[797]: Session 1 logged out. Waiting for processes to exit.
Dec 15 10:01:00 np0005559875.novalocal sshd-session[7290]: Accepted publickey for zuul from 38.102.83.114 port 42364 ssh2: RSA SHA256:oZqduNAfzNWmRCVtuNqNdfr90suxrjNE8fVduO6X/mo
Dec 15 10:01:00 np0005559875.novalocal systemd-logind[797]: New session 3 of user zuul.
Dec 15 10:01:00 np0005559875.novalocal systemd[1]: Started Session 3 of User zuul.
Dec 15 10:01:00 np0005559875.novalocal sshd-session[7290]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 15 10:01:00 np0005559875.novalocal sudo[7369]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wklvqojrcfncgioijwshxgzrgxtmpghn ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 15 10:01:00 np0005559875.novalocal sudo[7369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:01:00 np0005559875.novalocal python3[7371]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 10:01:00 np0005559875.novalocal sudo[7369]: pam_unix(sudo:session): session closed for user root
Dec 15 10:01:00 np0005559875.novalocal sudo[7442]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyryvujludwabszgiunxnqxwhwpitmxh ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 15 10:01:00 np0005559875.novalocal sudo[7442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:01:01 np0005559875.novalocal python3[7444]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765792860.4950137-373-121413158093967/source _original_basename=tmpixbe25cn follow=False checksum=b807fc223714b54466e6e143bdf31c0b4c5b9b7b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:01:01 np0005559875.novalocal sudo[7442]: pam_unix(sudo:session): session closed for user root
Dec 15 10:01:01 np0005559875.novalocal CROND[7470]: (root) CMD (run-parts /etc/cron.hourly)
Dec 15 10:01:01 np0005559875.novalocal run-parts[7473]: (/etc/cron.hourly) starting 0anacron
Dec 15 10:01:01 np0005559875.novalocal anacron[7481]: Anacron started on 2025-12-15
Dec 15 10:01:01 np0005559875.novalocal anacron[7481]: Will run job `cron.daily' in 11 min.
Dec 15 10:01:01 np0005559875.novalocal anacron[7481]: Will run job `cron.weekly' in 31 min.
Dec 15 10:01:01 np0005559875.novalocal anacron[7481]: Will run job `cron.monthly' in 51 min.
Dec 15 10:01:01 np0005559875.novalocal anacron[7481]: Jobs will be executed sequentially
Dec 15 10:01:01 np0005559875.novalocal run-parts[7483]: (/etc/cron.hourly) finished 0anacron
Dec 15 10:01:01 np0005559875.novalocal CROND[7469]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 15 10:01:05 np0005559875.novalocal sshd-session[7293]: Connection closed by 38.102.83.114 port 42364
Dec 15 10:01:05 np0005559875.novalocal sshd-session[7290]: pam_unix(sshd:session): session closed for user zuul
Dec 15 10:01:05 np0005559875.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Dec 15 10:01:05 np0005559875.novalocal systemd-logind[797]: Session 3 logged out. Waiting for processes to exit.
Dec 15 10:01:05 np0005559875.novalocal systemd-logind[797]: Removed session 3.
Dec 15 10:02:13 np0005559875.novalocal systemd[4302]: Created slice User Background Tasks Slice.
Dec 15 10:02:13 np0005559875.novalocal systemd[4302]: Starting Cleanup of User's Temporary Files and Directories...
Dec 15 10:02:13 np0005559875.novalocal systemd[4302]: Finished Cleanup of User's Temporary Files and Directories.
Dec 15 10:03:38 np0005559875.novalocal sshd-session[7487]: Connection closed by 120.157.59.86 port 33758
Dec 15 10:03:39 np0005559875.novalocal sshd-session[7488]: Invalid user a from 120.157.59.86 port 33768
Dec 15 10:03:39 np0005559875.novalocal sshd-session[7488]: Connection closed by invalid user a 120.157.59.86 port 33768 [preauth]
Dec 15 10:06:17 np0005559875.novalocal sshd-session[7491]: Accepted publickey for zuul from 38.102.83.114 port 46338 ssh2: RSA SHA256:oZqduNAfzNWmRCVtuNqNdfr90suxrjNE8fVduO6X/mo
Dec 15 10:06:17 np0005559875.novalocal systemd-logind[797]: New session 4 of user zuul.
Dec 15 10:06:17 np0005559875.novalocal systemd[1]: Started Session 4 of User zuul.
Dec 15 10:06:17 np0005559875.novalocal sshd-session[7491]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 15 10:06:17 np0005559875.novalocal sudo[7518]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wunctdrdqhnyfywydstjxgwvehrayxac ; /usr/bin/python3'
Dec 15 10:06:17 np0005559875.novalocal sudo[7518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:06:17 np0005559875.novalocal python3[7520]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-5712-7989-000000001f71-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:06:17 np0005559875.novalocal sudo[7518]: pam_unix(sudo:session): session closed for user root
Dec 15 10:06:17 np0005559875.novalocal sudo[7546]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmzgujylrutavtgsjiudwqbmcvmathhx ; /usr/bin/python3'
Dec 15 10:06:17 np0005559875.novalocal sudo[7546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:06:17 np0005559875.novalocal python3[7548]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:06:17 np0005559875.novalocal sudo[7546]: pam_unix(sudo:session): session closed for user root
Dec 15 10:06:17 np0005559875.novalocal sudo[7573]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atsjzveezqawscwjgxnbpzimaqxdvift ; /usr/bin/python3'
Dec 15 10:06:17 np0005559875.novalocal sudo[7573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:06:18 np0005559875.novalocal python3[7575]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:06:18 np0005559875.novalocal sudo[7573]: pam_unix(sudo:session): session closed for user root
Dec 15 10:06:18 np0005559875.novalocal sudo[7599]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yadggthfqdtoyxumnenkmxehazfzkpwd ; /usr/bin/python3'
Dec 15 10:06:18 np0005559875.novalocal sudo[7599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:06:18 np0005559875.novalocal python3[7601]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:06:18 np0005559875.novalocal sudo[7599]: pam_unix(sudo:session): session closed for user root
Dec 15 10:06:18 np0005559875.novalocal sudo[7625]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chyqemqacmgfegclbuqpmwwwjdgrzlqz ; /usr/bin/python3'
Dec 15 10:06:18 np0005559875.novalocal sudo[7625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:06:18 np0005559875.novalocal python3[7627]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:06:18 np0005559875.novalocal sudo[7625]: pam_unix(sudo:session): session closed for user root
Dec 15 10:06:19 np0005559875.novalocal sudo[7654]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mooanfqgwuoiqxxhzhbntsignvpifoxm ; /usr/bin/python3'
Dec 15 10:06:19 np0005559875.novalocal sudo[7654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:06:19 np0005559875.novalocal python3[7656]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:06:19 np0005559875.novalocal sudo[7654]: pam_unix(sudo:session): session closed for user root
Dec 15 10:06:19 np0005559875.novalocal sudo[7732]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoabfjyjgiuwmtckxvihoatytjfsvlbt ; /usr/bin/python3'
Dec 15 10:06:19 np0005559875.novalocal sudo[7732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:06:19 np0005559875.novalocal python3[7734]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 10:06:19 np0005559875.novalocal sudo[7732]: pam_unix(sudo:session): session closed for user root
Dec 15 10:06:20 np0005559875.novalocal sudo[7805]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxzslwxymytbyaqusucacvyjygevvxpi ; /usr/bin/python3'
Dec 15 10:06:20 np0005559875.novalocal sudo[7805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:06:20 np0005559875.novalocal python3[7807]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765793179.4940927-521-145616332982349/source _original_basename=tmppzu39tlv follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:06:20 np0005559875.novalocal sudo[7805]: pam_unix(sudo:session): session closed for user root
Dec 15 10:06:20 np0005559875.novalocal sudo[7855]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khfxlyayxmkrcfrohnsdysylmqbhsvpf ; /usr/bin/python3'
Dec 15 10:06:20 np0005559875.novalocal sudo[7855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:06:21 np0005559875.novalocal python3[7857]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 15 10:06:21 np0005559875.novalocal systemd[1]: Reloading.
Dec 15 10:06:21 np0005559875.novalocal systemd-rc-local-generator[7880]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:06:21 np0005559875.novalocal sudo[7855]: pam_unix(sudo:session): session closed for user root
Dec 15 10:06:22 np0005559875.novalocal sudo[7912]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tebvifeonizsuklubvyetiqqugazwrts ; /usr/bin/python3'
Dec 15 10:06:22 np0005559875.novalocal sudo[7912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:06:22 np0005559875.novalocal python3[7914]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 15 10:06:22 np0005559875.novalocal sudo[7912]: pam_unix(sudo:session): session closed for user root
Dec 15 10:06:23 np0005559875.novalocal sudo[7938]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yovpauzfzfisekgrlmdoohmvduivxwgt ; /usr/bin/python3'
Dec 15 10:06:23 np0005559875.novalocal sudo[7938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:06:23 np0005559875.novalocal python3[7940]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:06:23 np0005559875.novalocal sudo[7938]: pam_unix(sudo:session): session closed for user root
Dec 15 10:06:23 np0005559875.novalocal sudo[7966]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfkgndcvqmbtdcxjgzguiunurkcdluhv ; /usr/bin/python3'
Dec 15 10:06:23 np0005559875.novalocal sudo[7966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:06:23 np0005559875.novalocal python3[7968]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:06:23 np0005559875.novalocal sudo[7966]: pam_unix(sudo:session): session closed for user root
Dec 15 10:06:23 np0005559875.novalocal sudo[7994]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwlslmkqqmpkkvyvkqvsckbhzuhcgxuz ; /usr/bin/python3'
Dec 15 10:06:23 np0005559875.novalocal sudo[7994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:06:23 np0005559875.novalocal python3[7996]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:06:23 np0005559875.novalocal sudo[7994]: pam_unix(sudo:session): session closed for user root
Dec 15 10:06:24 np0005559875.novalocal sudo[8022]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbvolhjjshcescpprpcfjqwicjresdru ; /usr/bin/python3'
Dec 15 10:06:24 np0005559875.novalocal sudo[8022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:06:24 np0005559875.novalocal python3[8024]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:06:24 np0005559875.novalocal sudo[8022]: pam_unix(sudo:session): session closed for user root
Dec 15 10:06:25 np0005559875.novalocal python3[8051]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-5712-7989-000000001f78-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:06:25 np0005559875.novalocal python3[8081]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 15 10:06:28 np0005559875.novalocal sshd-session[7494]: Connection closed by 38.102.83.114 port 46338
Dec 15 10:06:28 np0005559875.novalocal sshd-session[7491]: pam_unix(sshd:session): session closed for user zuul
Dec 15 10:06:28 np0005559875.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Dec 15 10:06:28 np0005559875.novalocal systemd[1]: session-4.scope: Consumed 4.497s CPU time.
Dec 15 10:06:28 np0005559875.novalocal systemd-logind[797]: Session 4 logged out. Waiting for processes to exit.
Dec 15 10:06:28 np0005559875.novalocal systemd-logind[797]: Removed session 4.
Dec 15 10:06:30 np0005559875.novalocal sshd-session[8086]: Accepted publickey for zuul from 38.102.83.114 port 43900 ssh2: RSA SHA256:oZqduNAfzNWmRCVtuNqNdfr90suxrjNE8fVduO6X/mo
Dec 15 10:06:30 np0005559875.novalocal systemd-logind[797]: New session 5 of user zuul.
Dec 15 10:06:30 np0005559875.novalocal systemd[1]: Started Session 5 of User zuul.
Dec 15 10:06:30 np0005559875.novalocal sshd-session[8086]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 15 10:06:30 np0005559875.novalocal sudo[8113]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfoejwkmxcsavbgkfaavwddehjdqfwqu ; /usr/bin/python3'
Dec 15 10:06:30 np0005559875.novalocal sudo[8113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:06:30 np0005559875.novalocal python3[8115]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 15 10:06:49 np0005559875.novalocal kernel: SELinux:  Converting 384 SID table entries...
Dec 15 10:06:49 np0005559875.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 15 10:06:49 np0005559875.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 15 10:06:49 np0005559875.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 15 10:06:49 np0005559875.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 15 10:06:49 np0005559875.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 15 10:06:49 np0005559875.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 15 10:06:49 np0005559875.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 15 10:07:01 np0005559875.novalocal kernel: SELinux:  Converting 384 SID table entries...
Dec 15 10:07:01 np0005559875.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 15 10:07:01 np0005559875.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 15 10:07:01 np0005559875.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 15 10:07:01 np0005559875.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 15 10:07:01 np0005559875.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 15 10:07:01 np0005559875.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 15 10:07:01 np0005559875.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 15 10:07:11 np0005559875.novalocal kernel: SELinux:  Converting 384 SID table entries...
Dec 15 10:07:11 np0005559875.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 15 10:07:11 np0005559875.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 15 10:07:11 np0005559875.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 15 10:07:11 np0005559875.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 15 10:07:11 np0005559875.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 15 10:07:11 np0005559875.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 15 10:07:11 np0005559875.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 15 10:07:12 np0005559875.novalocal setsebool[8181]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 15 10:07:12 np0005559875.novalocal setsebool[8181]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec 15 10:07:25 np0005559875.novalocal kernel: SELinux:  Converting 387 SID table entries...
Dec 15 10:07:25 np0005559875.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 15 10:07:25 np0005559875.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 15 10:07:25 np0005559875.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 15 10:07:25 np0005559875.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 15 10:07:25 np0005559875.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 15 10:07:25 np0005559875.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 15 10:07:25 np0005559875.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 15 10:07:44 np0005559875.novalocal dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 15 10:07:44 np0005559875.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 15 10:07:44 np0005559875.novalocal systemd[1]: Starting man-db-cache-update.service...
Dec 15 10:07:44 np0005559875.novalocal systemd[1]: Reloading.
Dec 15 10:07:44 np0005559875.novalocal systemd-rc-local-generator[8934]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:07:45 np0005559875.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Dec 15 10:07:46 np0005559875.novalocal sudo[8113]: pam_unix(sudo:session): session closed for user root
Dec 15 10:07:47 np0005559875.novalocal python3[10347]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163efc-24cc-c402-0a25-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:07:48 np0005559875.novalocal kernel: evm: overlay not supported
Dec 15 10:07:48 np0005559875.novalocal systemd[4302]: Starting D-Bus User Message Bus...
Dec 15 10:07:48 np0005559875.novalocal dbus-broker-launch[11516]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 15 10:07:48 np0005559875.novalocal dbus-broker-launch[11516]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 15 10:07:48 np0005559875.novalocal systemd[4302]: Started D-Bus User Message Bus.
Dec 15 10:07:48 np0005559875.novalocal dbus-broker-lau[11516]: Ready
Dec 15 10:07:48 np0005559875.novalocal systemd[4302]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 15 10:07:48 np0005559875.novalocal systemd[4302]: Created slice Slice /user.
Dec 15 10:07:48 np0005559875.novalocal systemd[4302]: podman-11391.scope: unit configures an IP firewall, but not running as root.
Dec 15 10:07:48 np0005559875.novalocal systemd[4302]: (This warning is only shown for the first unit using IP firewalling.)
Dec 15 10:07:48 np0005559875.novalocal systemd[4302]: Started podman-11391.scope.
Dec 15 10:07:48 np0005559875.novalocal systemd[4302]: Started podman-pause-730922e6.scope.
Dec 15 10:07:49 np0005559875.novalocal sudo[12314]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yulnuyedqzuspwjapsqkrbojngmbqnrt ; /usr/bin/python3'
Dec 15 10:07:49 np0005559875.novalocal sudo[12314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:07:49 np0005559875.novalocal python3[12336]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.44:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.44:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:07:49 np0005559875.novalocal python3[12336]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec 15 10:07:49 np0005559875.novalocal sudo[12314]: pam_unix(sudo:session): session closed for user root
Dec 15 10:07:49 np0005559875.novalocal sshd-session[8089]: Connection closed by 38.102.83.114 port 43900
Dec 15 10:07:49 np0005559875.novalocal sshd-session[8086]: pam_unix(sshd:session): session closed for user zuul
Dec 15 10:07:49 np0005559875.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Dec 15 10:07:49 np0005559875.novalocal systemd[1]: session-5.scope: Consumed 1min 10.481s CPU time.
Dec 15 10:07:49 np0005559875.novalocal systemd-logind[797]: Session 5 logged out. Waiting for processes to exit.
Dec 15 10:07:49 np0005559875.novalocal systemd-logind[797]: Removed session 5.
Dec 15 10:08:13 np0005559875.novalocal irqbalance[793]: Cannot change IRQ 27 affinity: Operation not permitted
Dec 15 10:08:13 np0005559875.novalocal irqbalance[793]: IRQ 27 affinity is now unmanaged
Dec 15 10:08:13 np0005559875.novalocal sshd-session[22548]: Unable to negotiate with 38.102.83.199 port 47406: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 15 10:08:13 np0005559875.novalocal sshd-session[22550]: Unable to negotiate with 38.102.83.199 port 47384: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 15 10:08:13 np0005559875.novalocal sshd-session[22554]: Unable to negotiate with 38.102.83.199 port 47396: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 15 10:08:13 np0005559875.novalocal sshd-session[22556]: Connection closed by 38.102.83.199 port 47376 [preauth]
Dec 15 10:08:13 np0005559875.novalocal sshd-session[22555]: Connection closed by 38.102.83.199 port 47380 [preauth]
Dec 15 10:08:19 np0005559875.novalocal sshd-session[25128]: Accepted publickey for zuul from 38.102.83.114 port 33360 ssh2: RSA SHA256:oZqduNAfzNWmRCVtuNqNdfr90suxrjNE8fVduO6X/mo
Dec 15 10:08:19 np0005559875.novalocal systemd-logind[797]: New session 6 of user zuul.
Dec 15 10:08:19 np0005559875.novalocal systemd[1]: Started Session 6 of User zuul.
Dec 15 10:08:19 np0005559875.novalocal sshd-session[25128]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 15 10:08:20 np0005559875.novalocal python3[25236]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK/Lebufwl1gDK1O2Y206vUj1abefTutWKuODcjE4SqPoAeH80dZsIzONduYF8rBhNHrstGn3SsI/bp9pAO8YCM= zuul@np0005559874.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 10:08:20 np0005559875.novalocal sudo[25403]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vipvgfzjvrsgocztsgxkdevcbekidxmy ; /usr/bin/python3'
Dec 15 10:08:20 np0005559875.novalocal sudo[25403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:08:20 np0005559875.novalocal python3[25414]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK/Lebufwl1gDK1O2Y206vUj1abefTutWKuODcjE4SqPoAeH80dZsIzONduYF8rBhNHrstGn3SsI/bp9pAO8YCM= zuul@np0005559874.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 10:08:20 np0005559875.novalocal sudo[25403]: pam_unix(sudo:session): session closed for user root
Dec 15 10:08:21 np0005559875.novalocal sudo[25899]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wosknfipcpldgmosflgmocfqyllflqee ; /usr/bin/python3'
Dec 15 10:08:21 np0005559875.novalocal sudo[25899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:08:21 np0005559875.novalocal python3[25909]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005559875.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 15 10:08:21 np0005559875.novalocal useradd[25993]: new group: name=cloud-admin, GID=1002
Dec 15 10:08:21 np0005559875.novalocal useradd[25993]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Dec 15 10:08:21 np0005559875.novalocal sudo[25899]: pam_unix(sudo:session): session closed for user root
Dec 15 10:08:21 np0005559875.novalocal sudo[26135]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzsxvcvbycjcovffcgyifwsneqiroxtq ; /usr/bin/python3'
Dec 15 10:08:21 np0005559875.novalocal sudo[26135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:08:21 np0005559875.novalocal python3[26143]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK/Lebufwl1gDK1O2Y206vUj1abefTutWKuODcjE4SqPoAeH80dZsIzONduYF8rBhNHrstGn3SsI/bp9pAO8YCM= zuul@np0005559874.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 15 10:08:21 np0005559875.novalocal sudo[26135]: pam_unix(sudo:session): session closed for user root
Dec 15 10:08:22 np0005559875.novalocal sudo[26438]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obvhbrndhubtajpbgotvrmroinvsvvbo ; /usr/bin/python3'
Dec 15 10:08:22 np0005559875.novalocal sudo[26438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:08:22 np0005559875.novalocal python3[26447]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 10:08:22 np0005559875.novalocal sudo[26438]: pam_unix(sudo:session): session closed for user root
Dec 15 10:08:22 np0005559875.novalocal sudo[26706]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euttamlkcnrbgihxosblbsbdhpfbftwe ; /usr/bin/python3'
Dec 15 10:08:22 np0005559875.novalocal sudo[26706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:08:22 np0005559875.novalocal python3[26714]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765793302.1837418-167-187644718410883/source _original_basename=tmpwwbevuaw follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:08:22 np0005559875.novalocal sudo[26706]: pam_unix(sudo:session): session closed for user root
Dec 15 10:08:23 np0005559875.novalocal sudo[27057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcfzounwdnmmlfafsfbcdkigelrzmmuo ; /usr/bin/python3'
Dec 15 10:08:23 np0005559875.novalocal sudo[27057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:08:23 np0005559875.novalocal python3[27067]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec 15 10:08:23 np0005559875.novalocal systemd[1]: Starting Hostname Service...
Dec 15 10:08:23 np0005559875.novalocal systemd[1]: Started Hostname Service.
Dec 15 10:08:23 np0005559875.novalocal systemd-hostnamed[27193]: Changed pretty hostname to 'compute-0'
Dec 15 10:08:23 compute-0 systemd-hostnamed[27193]: Hostname set to <compute-0> (static)
Dec 15 10:08:23 compute-0 NetworkManager[7187]: <info>  [1765793303.9232] hostname: static hostname changed from "np0005559875.novalocal" to "compute-0"
Dec 15 10:08:23 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 15 10:08:23 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 15 10:08:24 compute-0 sudo[27057]: pam_unix(sudo:session): session closed for user root
Dec 15 10:08:25 compute-0 sshd-session[25183]: Connection closed by 38.102.83.114 port 33360
Dec 15 10:08:25 compute-0 sshd-session[25128]: pam_unix(sshd:session): session closed for user zuul
Dec 15 10:08:25 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Dec 15 10:08:25 compute-0 systemd[1]: session-6.scope: Consumed 2.275s CPU time.
Dec 15 10:08:25 compute-0 systemd-logind[797]: Session 6 logged out. Waiting for processes to exit.
Dec 15 10:08:25 compute-0 systemd-logind[797]: Removed session 6.
Dec 15 10:08:30 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 15 10:08:30 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 15 10:08:30 compute-0 systemd[1]: man-db-cache-update.service: Consumed 54.956s CPU time.
Dec 15 10:08:30 compute-0 systemd[1]: run-r1a35d90b6dfe4697a9fd8d1875ff1dd1.service: Deactivated successfully.
Dec 15 10:08:33 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 15 10:08:53 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 15 10:11:13 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Dec 15 10:11:13 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 15 10:11:13 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Dec 15 10:11:13 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 15 10:12:01 compute-0 anacron[7481]: Job `cron.daily' started
Dec 15 10:12:01 compute-0 anacron[7481]: Job `cron.daily' terminated
Dec 15 10:12:27 compute-0 sshd-session[29940]: Accepted publickey for zuul from 38.102.83.199 port 37694 ssh2: RSA SHA256:oZqduNAfzNWmRCVtuNqNdfr90suxrjNE8fVduO6X/mo
Dec 15 10:12:27 compute-0 systemd-logind[797]: New session 7 of user zuul.
Dec 15 10:12:27 compute-0 systemd[1]: Started Session 7 of User zuul.
Dec 15 10:12:27 compute-0 sshd-session[29940]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 15 10:12:27 compute-0 python3[30016]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 15 10:12:29 compute-0 sudo[30130]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wffhapnbvqormveenzglcctxlkitojlr ; /usr/bin/python3'
Dec 15 10:12:29 compute-0 sudo[30130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:12:29 compute-0 python3[30132]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 10:12:29 compute-0 sudo[30130]: pam_unix(sudo:session): session closed for user root
Dec 15 10:12:29 compute-0 sudo[30203]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znuspufpjxcfrrmrohknhyfrbzaqpljk ; /usr/bin/python3'
Dec 15 10:12:29 compute-0 sudo[30203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:12:30 compute-0 python3[30205]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765793549.2905092-33967-227183026608114/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:12:30 compute-0 sudo[30203]: pam_unix(sudo:session): session closed for user root
Dec 15 10:12:30 compute-0 sudo[30229]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bketrmsxcxkdpcmblybywmwrxclluexr ; /usr/bin/python3'
Dec 15 10:12:30 compute-0 sudo[30229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:12:30 compute-0 python3[30231]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 10:12:30 compute-0 sudo[30229]: pam_unix(sudo:session): session closed for user root
Dec 15 10:12:30 compute-0 sudo[30302]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyrjyhnturfbbebrosjxckclqhhtryjs ; /usr/bin/python3'
Dec 15 10:12:30 compute-0 sudo[30302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:12:30 compute-0 python3[30304]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765793549.2905092-33967-227183026608114/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:12:30 compute-0 sudo[30302]: pam_unix(sudo:session): session closed for user root
Dec 15 10:12:30 compute-0 sudo[30328]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zduyasollcilpgfyodzceoolgzmqrqka ; /usr/bin/python3'
Dec 15 10:12:30 compute-0 sudo[30328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:12:30 compute-0 python3[30330]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 10:12:30 compute-0 sudo[30328]: pam_unix(sudo:session): session closed for user root
Dec 15 10:12:31 compute-0 sudo[30401]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpfrdydrcjdfxcxnmtlhygftfrwkauua ; /usr/bin/python3'
Dec 15 10:12:31 compute-0 sudo[30401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:12:31 compute-0 python3[30403]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765793549.2905092-33967-227183026608114/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:12:31 compute-0 sudo[30401]: pam_unix(sudo:session): session closed for user root
Dec 15 10:12:31 compute-0 sudo[30427]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evkmeflnzzsnzceozyhcaotwzqivyqok ; /usr/bin/python3'
Dec 15 10:12:31 compute-0 sudo[30427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:12:31 compute-0 python3[30429]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 10:12:31 compute-0 sudo[30427]: pam_unix(sudo:session): session closed for user root
Dec 15 10:12:31 compute-0 sudo[30500]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdxullqjpmoowktudbytbqrfonzhladp ; /usr/bin/python3'
Dec 15 10:12:31 compute-0 sudo[30500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:12:31 compute-0 python3[30502]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765793549.2905092-33967-227183026608114/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:12:31 compute-0 sudo[30500]: pam_unix(sudo:session): session closed for user root
Dec 15 10:12:32 compute-0 sudo[30526]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tliwbxqtdwlayflbsftrdnchzqnnimfe ; /usr/bin/python3'
Dec 15 10:12:32 compute-0 sudo[30526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:12:32 compute-0 python3[30528]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 10:12:32 compute-0 sudo[30526]: pam_unix(sudo:session): session closed for user root
Dec 15 10:12:32 compute-0 sudo[30599]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfcrbggfcijhxlgnfjmhbctugryukubt ; /usr/bin/python3'
Dec 15 10:12:32 compute-0 sudo[30599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:12:32 compute-0 python3[30601]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765793549.2905092-33967-227183026608114/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:12:32 compute-0 sudo[30599]: pam_unix(sudo:session): session closed for user root
Dec 15 10:12:32 compute-0 sudo[30625]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvvrkwlyvkqspxcixeyqhxerceynadqf ; /usr/bin/python3'
Dec 15 10:12:32 compute-0 sudo[30625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:12:33 compute-0 python3[30627]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 10:12:33 compute-0 sudo[30625]: pam_unix(sudo:session): session closed for user root
Dec 15 10:12:33 compute-0 sudo[30698]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oahjqbqybekxhtwukhrtrfamiatdberc ; /usr/bin/python3'
Dec 15 10:12:33 compute-0 sudo[30698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:12:33 compute-0 irqbalance[793]: Cannot change IRQ 26 affinity: Operation not permitted
Dec 15 10:12:33 compute-0 irqbalance[793]: IRQ 26 affinity is now unmanaged
Dec 15 10:12:33 compute-0 python3[30700]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765793549.2905092-33967-227183026608114/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:12:33 compute-0 sudo[30698]: pam_unix(sudo:session): session closed for user root
Dec 15 10:12:33 compute-0 sudo[30724]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnqwrqoqkbpgelilafgnmdxqxjyqrcdm ; /usr/bin/python3'
Dec 15 10:12:33 compute-0 sudo[30724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:12:33 compute-0 python3[30726]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 10:12:33 compute-0 sudo[30724]: pam_unix(sudo:session): session closed for user root
Dec 15 10:12:33 compute-0 sudo[30797]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyyedcseytfcmzzxaksxjfdweogqvjmi ; /usr/bin/python3'
Dec 15 10:12:33 compute-0 sudo[30797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:12:33 compute-0 python3[30799]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765793549.2905092-33967-227183026608114/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:12:33 compute-0 sudo[30797]: pam_unix(sudo:session): session closed for user root
Dec 15 10:12:36 compute-0 sshd-session[30824]: Unable to negotiate with 192.168.122.11 port 42526: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 15 10:12:36 compute-0 sshd-session[30826]: Connection closed by 192.168.122.11 port 42492 [preauth]
Dec 15 10:12:36 compute-0 sshd-session[30825]: Connection closed by 192.168.122.11 port 42504 [preauth]
Dec 15 10:12:36 compute-0 sshd-session[30827]: Unable to negotiate with 192.168.122.11 port 42510: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 15 10:12:36 compute-0 sshd-session[30828]: Unable to negotiate with 192.168.122.11 port 42508: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 15 10:12:45 compute-0 python3[30857]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:17:45 compute-0 sshd-session[29943]: Received disconnect from 38.102.83.199 port 37694:11: disconnected by user
Dec 15 10:17:45 compute-0 sshd-session[29943]: Disconnected from user zuul 38.102.83.199 port 37694
Dec 15 10:17:45 compute-0 sshd-session[29940]: pam_unix(sshd:session): session closed for user zuul
Dec 15 10:17:45 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Dec 15 10:17:45 compute-0 systemd[1]: session-7.scope: Consumed 4.757s CPU time.
Dec 15 10:17:45 compute-0 systemd-logind[797]: Session 7 logged out. Waiting for processes to exit.
Dec 15 10:17:45 compute-0 systemd-logind[797]: Removed session 7.
Dec 15 10:25:22 compute-0 sshd-session[30864]: Accepted publickey for zuul from 192.168.122.30 port 57546 ssh2: ECDSA SHA256:RI7NGykFU6DY8VmZwamQwcvn1msDfj+uaMVMpAHHUfo
Dec 15 10:25:22 compute-0 systemd-logind[797]: New session 8 of user zuul.
Dec 15 10:25:22 compute-0 systemd[1]: Started Session 8 of User zuul.
Dec 15 10:25:22 compute-0 sshd-session[30864]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 15 10:25:23 compute-0 python3.9[31017]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 15 10:25:24 compute-0 sudo[31196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rphbjvnwczvgnwtjeirdzsfltdhrbyvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794324.1374028-56-273278121995765/AnsiballZ_command.py'
Dec 15 10:25:24 compute-0 sudo[31196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:25:24 compute-0 python3.9[31198]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:25:41 compute-0 sudo[31196]: pam_unix(sudo:session): session closed for user root
Dec 15 10:25:42 compute-0 sshd-session[30867]: Connection closed by 192.168.122.30 port 57546
Dec 15 10:25:42 compute-0 sshd-session[30864]: pam_unix(sshd:session): session closed for user zuul
Dec 15 10:25:42 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Dec 15 10:25:42 compute-0 systemd[1]: session-8.scope: Consumed 8.537s CPU time.
Dec 15 10:25:42 compute-0 systemd-logind[797]: Session 8 logged out. Waiting for processes to exit.
Dec 15 10:25:42 compute-0 systemd-logind[797]: Removed session 8.
Dec 15 10:26:03 compute-0 sshd-session[31256]: Accepted publickey for zuul from 192.168.122.30 port 46084 ssh2: ECDSA SHA256:RI7NGykFU6DY8VmZwamQwcvn1msDfj+uaMVMpAHHUfo
Dec 15 10:26:03 compute-0 systemd-logind[797]: New session 9 of user zuul.
Dec 15 10:26:03 compute-0 systemd[1]: Started Session 9 of User zuul.
Dec 15 10:26:03 compute-0 sshd-session[31256]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 15 10:26:04 compute-0 python3.9[31409]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 15 10:26:05 compute-0 python3.9[31583]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 15 10:26:06 compute-0 sudo[31733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfrzlljkrcclrxwgybvmqmjbswkelbkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794366.3333676-93-247621862673781/AnsiballZ_command.py'
Dec 15 10:26:06 compute-0 sudo[31733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:26:06 compute-0 python3.9[31735]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:26:06 compute-0 sudo[31733]: pam_unix(sudo:session): session closed for user root
Dec 15 10:26:07 compute-0 sudo[31886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jteatfwstsrtftyjmhanjvghekhozvnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794367.3892312-129-206303351949077/AnsiballZ_stat.py'
Dec 15 10:26:07 compute-0 sudo[31886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:26:08 compute-0 python3.9[31888]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 15 10:26:08 compute-0 sudo[31886]: pam_unix(sudo:session): session closed for user root
Dec 15 10:26:08 compute-0 sudo[32038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ennwyfittfbgqaiknxmbritphfprqufo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794368.2078369-153-10086404827512/AnsiballZ_file.py'
Dec 15 10:26:08 compute-0 sudo[32038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:26:08 compute-0 python3.9[32040]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:26:08 compute-0 sudo[32038]: pam_unix(sudo:session): session closed for user root
Dec 15 10:26:09 compute-0 sudo[32190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifjvhhbiowzoxjhvcdmovykkabnxwcgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794369.1342275-177-83112354833202/AnsiballZ_stat.py'
Dec 15 10:26:09 compute-0 sudo[32190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:26:09 compute-0 python3.9[32192]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:26:09 compute-0 sudo[32190]: pam_unix(sudo:session): session closed for user root
Dec 15 10:26:10 compute-0 sudo[32313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubsrhakijxyayfvvyippccoplbbirhwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794369.1342275-177-83112354833202/AnsiballZ_copy.py'
Dec 15 10:26:10 compute-0 sudo[32313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:26:10 compute-0 python3.9[32315]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765794369.1342275-177-83112354833202/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:26:10 compute-0 sudo[32313]: pam_unix(sudo:session): session closed for user root
Dec 15 10:26:10 compute-0 sudo[32465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcapvsuavhzfjngmppcsaepszfbszsos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794370.5753734-222-161239050539259/AnsiballZ_setup.py'
Dec 15 10:26:10 compute-0 sudo[32465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:26:11 compute-0 python3.9[32467]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 15 10:26:11 compute-0 sudo[32465]: pam_unix(sudo:session): session closed for user root
Dec 15 10:26:11 compute-0 sudo[32621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyyoxvfcorjacqcyddvthbxmqqngbiwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794371.5536284-246-156213615414886/AnsiballZ_file.py'
Dec 15 10:26:11 compute-0 sudo[32621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:26:12 compute-0 python3.9[32623]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 15 10:26:12 compute-0 sudo[32621]: pam_unix(sudo:session): session closed for user root
Dec 15 10:26:12 compute-0 sudo[32773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfppmpjgdmkhcdrvfbxeugeoutktntjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794372.3635616-273-7083191079780/AnsiballZ_file.py'
Dec 15 10:26:12 compute-0 sudo[32773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:26:12 compute-0 python3.9[32775]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 15 10:26:12 compute-0 sudo[32773]: pam_unix(sudo:session): session closed for user root
Dec 15 10:26:13 compute-0 python3.9[32925]: ansible-ansible.builtin.service_facts Invoked
Dec 15 10:26:18 compute-0 python3.9[33178]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:26:19 compute-0 python3.9[33328]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 15 10:26:20 compute-0 python3.9[33482]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 15 10:26:21 compute-0 sudo[33638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yejjcwyezrvphfpwmncndyceukpjrcld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794381.2583358-417-57682254385000/AnsiballZ_setup.py'
Dec 15 10:26:21 compute-0 sudo[33638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:26:21 compute-0 python3.9[33640]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 15 10:26:22 compute-0 sudo[33638]: pam_unix(sudo:session): session closed for user root
Dec 15 10:26:22 compute-0 sudo[33722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gexvyzyofuweghunkkiekshrwrpaqpic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794381.2583358-417-57682254385000/AnsiballZ_dnf.py'
Dec 15 10:26:22 compute-0 sudo[33722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:26:22 compute-0 python3.9[33724]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 15 10:27:18 compute-0 systemd[1]: Reloading.
Dec 15 10:27:18 compute-0 systemd-rc-local-generator[33920]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:27:18 compute-0 systemd[1]: Starting dnf makecache...
Dec 15 10:27:18 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 15 10:27:18 compute-0 dnf[33932]: Failed determining last makecache time.
Dec 15 10:27:18 compute-0 dnf[33932]: delorean-openstack-barbican-42b4c41831408a8e323 177 kB/s | 3.0 kB     00:00
Dec 15 10:27:18 compute-0 dnf[33932]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 204 kB/s | 3.0 kB     00:00
Dec 15 10:27:18 compute-0 dnf[33932]: delorean-openstack-cinder-1c00d6490d88e436f26ef 169 kB/s | 3.0 kB     00:00
Dec 15 10:27:18 compute-0 systemd[1]: Reloading.
Dec 15 10:27:18 compute-0 dnf[33932]: delorean-python-stevedore-c4acc5639fd2329372142 132 kB/s | 3.0 kB     00:00
Dec 15 10:27:18 compute-0 dnf[33932]: delorean-python-cloudkitty-tests-tempest-2c80f8 154 kB/s | 3.0 kB     00:00
Dec 15 10:27:18 compute-0 systemd-rc-local-generator[33964]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:27:18 compute-0 dnf[33932]: delorean-os-refresh-config-9bfc52b5049be2d8de61 138 kB/s | 3.0 kB     00:00
Dec 15 10:27:18 compute-0 dnf[33932]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 182 kB/s | 3.0 kB     00:00
Dec 15 10:27:18 compute-0 dnf[33932]: delorean-python-designate-tests-tempest-347fdbc 171 kB/s | 3.0 kB     00:00
Dec 15 10:27:19 compute-0 dnf[33932]: delorean-openstack-glance-1fd12c29b339f30fe823e 184 kB/s | 3.0 kB     00:00
Dec 15 10:27:19 compute-0 dnf[33932]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 161 kB/s | 3.0 kB     00:00
Dec 15 10:27:19 compute-0 dnf[33932]: delorean-openstack-manila-3c01b7181572c95dac462 148 kB/s | 3.0 kB     00:00
Dec 15 10:27:19 compute-0 dnf[33932]: delorean-python-whitebox-neutron-tests-tempest- 163 kB/s | 3.0 kB     00:00
Dec 15 10:27:19 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 15 10:27:19 compute-0 dnf[33932]: delorean-openstack-octavia-ba397f07a7331190208c 161 kB/s | 3.0 kB     00:00
Dec 15 10:27:19 compute-0 dnf[33932]: delorean-openstack-watcher-c014f81a8647287f6dcc 150 kB/s | 3.0 kB     00:00
Dec 15 10:27:19 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 15 10:27:19 compute-0 dnf[33932]: delorean-ansible-config_template-5ccaa22121a7ff 156 kB/s | 3.0 kB     00:00
Dec 15 10:27:19 compute-0 systemd[1]: Reloading.
Dec 15 10:27:19 compute-0 dnf[33932]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 170 kB/s | 3.0 kB     00:00
Dec 15 10:27:19 compute-0 dnf[33932]: delorean-openstack-swift-dc98a8463506ac520c469a 160 kB/s | 3.0 kB     00:00
Dec 15 10:27:19 compute-0 dnf[33932]: delorean-python-tempestconf-8515371b7cceebd4282 156 kB/s | 3.0 kB     00:00
Dec 15 10:27:19 compute-0 systemd-rc-local-generator[34021]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:27:19 compute-0 dnf[33932]: delorean-openstack-heat-ui-013accbfd179753bc3f0 132 kB/s | 3.0 kB     00:00
Dec 15 10:27:19 compute-0 dnf[33932]: CentOS Stream 9 - BaseOS                         72 kB/s | 7.3 kB     00:00
Dec 15 10:27:19 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Dec 15 10:27:19 compute-0 dnf[33932]: CentOS Stream 9 - AppStream                      78 kB/s | 7.8 kB     00:00
Dec 15 10:27:19 compute-0 dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Dec 15 10:27:19 compute-0 dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Dec 15 10:27:19 compute-0 dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Dec 15 10:27:19 compute-0 dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Dec 15 10:27:19 compute-0 dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Dec 15 10:27:19 compute-0 dnf[33932]: CentOS Stream 9 - CRB                            73 kB/s | 7.2 kB     00:00
Dec 15 10:27:20 compute-0 dnf[33932]: CentOS Stream 9 - Extras packages                28 kB/s | 8.3 kB     00:00
Dec 15 10:27:20 compute-0 dnf[33932]: dlrn-antelope-testing                           169 kB/s | 3.0 kB     00:00
Dec 15 10:27:20 compute-0 dnf[33932]: dlrn-antelope-build-deps                        165 kB/s | 3.0 kB     00:00
Dec 15 10:27:20 compute-0 dnf[33932]: centos9-rabbitmq                                120 kB/s | 3.0 kB     00:00
Dec 15 10:27:20 compute-0 dnf[33932]: centos9-storage                                 119 kB/s | 3.0 kB     00:00
Dec 15 10:27:20 compute-0 dnf[33932]: centos9-opstools                                132 kB/s | 3.0 kB     00:00
Dec 15 10:27:20 compute-0 dnf[33932]: NFV SIG OpenvSwitch                             135 kB/s | 3.0 kB     00:00
Dec 15 10:27:20 compute-0 dnf[33932]: repo-setup-centos-appstream                     195 kB/s | 4.4 kB     00:00
Dec 15 10:27:20 compute-0 dnf[33932]: repo-setup-centos-baseos                        203 kB/s | 3.9 kB     00:00
Dec 15 10:27:20 compute-0 dnf[33932]: repo-setup-centos-highavailability              157 kB/s | 3.9 kB     00:00
Dec 15 10:27:20 compute-0 dnf[33932]: repo-setup-centos-powertools                     18 kB/s | 4.3 kB     00:00
Dec 15 10:27:21 compute-0 dnf[33932]: Extra Packages for Enterprise Linux 9 - x86_64   88 kB/s |  28 kB     00:00
Dec 15 10:27:21 compute-0 dnf[33932]: Metadata cache created.
Dec 15 10:27:21 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 15 10:27:21 compute-0 systemd[1]: Finished dnf makecache.
Dec 15 10:27:21 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.945s CPU time.
Dec 15 10:28:25 compute-0 kernel: SELinux:  Converting 2718 SID table entries...
Dec 15 10:28:25 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 15 10:28:25 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 15 10:28:25 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 15 10:28:25 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 15 10:28:25 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 15 10:28:25 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 15 10:28:25 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 15 10:28:25 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 15 10:28:25 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 15 10:28:25 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 15 10:28:25 compute-0 systemd[1]: Reloading.
Dec 15 10:28:25 compute-0 systemd-rc-local-generator[34387]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:28:25 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 15 10:28:26 compute-0 sudo[33722]: pam_unix(sudo:session): session closed for user root
Dec 15 10:28:27 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 15 10:28:27 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 15 10:28:27 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.146s CPU time.
Dec 15 10:28:27 compute-0 systemd[1]: run-rd40625ea2ae8485b9a6cbf6625f9e5f0.service: Deactivated successfully.
Dec 15 10:28:29 compute-0 sudo[35293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlhnlrejeugtuwswiytmfpwqqvfxqzbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794509.2913516-453-71910826137856/AnsiballZ_command.py'
Dec 15 10:28:29 compute-0 sudo[35293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:28:29 compute-0 python3.9[35295]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:28:30 compute-0 sudo[35293]: pam_unix(sudo:session): session closed for user root
Dec 15 10:28:31 compute-0 sudo[35574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qprpngacnindjmbcrzihjduepkivvbih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794511.0098965-477-280961962057565/AnsiballZ_selinux.py'
Dec 15 10:28:31 compute-0 sudo[35574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:28:31 compute-0 python3.9[35576]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 15 10:28:31 compute-0 sudo[35574]: pam_unix(sudo:session): session closed for user root
Dec 15 10:28:32 compute-0 sudo[35726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdwfdnwldythfusgljrrnhhbmyowabou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794512.3262367-510-213301812272231/AnsiballZ_command.py'
Dec 15 10:28:32 compute-0 sudo[35726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:28:32 compute-0 python3.9[35728]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 15 10:28:34 compute-0 sudo[35726]: pam_unix(sudo:session): session closed for user root
Dec 15 10:28:35 compute-0 sudo[35879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uusblzoednjqrvpyzunnsanwkjhdgqbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794514.9773574-534-155638981237756/AnsiballZ_file.py'
Dec 15 10:28:35 compute-0 sudo[35879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:28:35 compute-0 python3.9[35881]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:28:35 compute-0 sudo[35879]: pam_unix(sudo:session): session closed for user root
Dec 15 10:28:37 compute-0 sudo[36031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhaefzrtszwjecwsnlrqvpwmnuifpyjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794517.051311-558-144569909766066/AnsiballZ_mount.py'
Dec 15 10:28:37 compute-0 sudo[36031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:28:37 compute-0 python3.9[36033]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 15 10:28:37 compute-0 sudo[36031]: pam_unix(sudo:session): session closed for user root
Dec 15 10:28:39 compute-0 sudo[36183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qerktjqtxvflqkxvrngazxobwriwvdlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794518.935013-642-2379540777811/AnsiballZ_file.py'
Dec 15 10:28:39 compute-0 sudo[36183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:28:39 compute-0 python3.9[36185]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 15 10:28:39 compute-0 sudo[36183]: pam_unix(sudo:session): session closed for user root
Dec 15 10:28:39 compute-0 sudo[36335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouqmlwlwwddsrzhrkyzkwnabnnqjglcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794519.6432903-666-72327032797581/AnsiballZ_stat.py'
Dec 15 10:28:39 compute-0 sudo[36335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:28:40 compute-0 python3.9[36337]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:28:40 compute-0 sudo[36335]: pam_unix(sudo:session): session closed for user root
Dec 15 10:28:40 compute-0 sudo[36458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqxfjytrfuvnfvbbcfchufqcazigpcyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794519.6432903-666-72327032797581/AnsiballZ_copy.py'
Dec 15 10:28:40 compute-0 sudo[36458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:28:42 compute-0 python3.9[36460]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765794519.6432903-666-72327032797581/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73339c28e2006c1c4a421ed6f185d315a48e1394 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:28:42 compute-0 sudo[36458]: pam_unix(sudo:session): session closed for user root
Dec 15 10:28:49 compute-0 sudo[36610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzgyhqglukcfuvoysfcghklpsgtaytpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794528.9911094-738-246307014557791/AnsiballZ_stat.py'
Dec 15 10:28:49 compute-0 sudo[36610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:28:49 compute-0 python3.9[36612]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 15 10:28:49 compute-0 sudo[36610]: pam_unix(sudo:session): session closed for user root
Dec 15 10:28:50 compute-0 sudo[36762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyazkxrksdgaqscyvkjinyfrldchhomy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794529.800702-762-191308505491033/AnsiballZ_command.py'
Dec 15 10:28:50 compute-0 sudo[36762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:28:50 compute-0 python3.9[36764]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:28:50 compute-0 sudo[36762]: pam_unix(sudo:session): session closed for user root
Dec 15 10:28:50 compute-0 sudo[36915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpwxdnhzpuiwumkjtlvcwlmlpdluydjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794530.502197-786-29864783531641/AnsiballZ_file.py'
Dec 15 10:28:50 compute-0 sudo[36915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:28:50 compute-0 python3.9[36917]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:28:50 compute-0 sudo[36915]: pam_unix(sudo:session): session closed for user root
Dec 15 10:28:51 compute-0 sudo[37067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxdcsyadvvfwfidqzwukjzewriwyqeur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794531.3889832-819-88812982807340/AnsiballZ_getent.py'
Dec 15 10:28:51 compute-0 sudo[37067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:28:51 compute-0 python3.9[37069]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 15 10:28:52 compute-0 sudo[37067]: pam_unix(sudo:session): session closed for user root
Dec 15 10:28:52 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 15 10:28:52 compute-0 sudo[37221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dykrcfwaotxsslcacdpvlltvzaqshdit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794532.1986597-843-105260156129580/AnsiballZ_group.py'
Dec 15 10:28:52 compute-0 sudo[37221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:28:52 compute-0 python3.9[37223]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 15 10:28:52 compute-0 groupadd[37224]: group added to /etc/group: name=qemu, GID=107
Dec 15 10:28:52 compute-0 groupadd[37224]: group added to /etc/gshadow: name=qemu
Dec 15 10:28:53 compute-0 groupadd[37224]: new group: name=qemu, GID=107
Dec 15 10:28:53 compute-0 sudo[37221]: pam_unix(sudo:session): session closed for user root
Dec 15 10:28:53 compute-0 sudo[37379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guwdlfjqmymnqunynxcorqdqhwwmtxeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794533.2283015-867-120529448641944/AnsiballZ_user.py'
Dec 15 10:28:53 compute-0 sudo[37379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:28:54 compute-0 python3.9[37381]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 15 10:28:54 compute-0 useradd[37383]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Dec 15 10:28:54 compute-0 sudo[37379]: pam_unix(sudo:session): session closed for user root
Dec 15 10:28:55 compute-0 sudo[37539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyvmqvdbqkqscwtoifisndbiqezdwrqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794534.991315-891-231031649370718/AnsiballZ_getent.py'
Dec 15 10:28:55 compute-0 sudo[37539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:28:55 compute-0 python3.9[37541]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 15 10:28:55 compute-0 sudo[37539]: pam_unix(sudo:session): session closed for user root
Dec 15 10:28:56 compute-0 sudo[37692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwdqejuecmjzfvcqhbrpcllkmqiovvve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794535.7548616-915-224779258785560/AnsiballZ_group.py'
Dec 15 10:28:56 compute-0 sudo[37692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:28:56 compute-0 python3.9[37694]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 15 10:28:57 compute-0 groupadd[37695]: group added to /etc/group: name=hugetlbfs, GID=42477
Dec 15 10:28:57 compute-0 groupadd[37695]: group added to /etc/gshadow: name=hugetlbfs
Dec 15 10:28:57 compute-0 groupadd[37695]: new group: name=hugetlbfs, GID=42477
Dec 15 10:28:57 compute-0 sudo[37692]: pam_unix(sudo:session): session closed for user root
Dec 15 10:28:57 compute-0 sudo[37850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyjtmpljnbmguilebtdagrodzyxkfnab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794537.5921333-942-250169645871198/AnsiballZ_file.py'
Dec 15 10:28:57 compute-0 sudo[37850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:28:58 compute-0 python3.9[37852]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 15 10:28:58 compute-0 sudo[37850]: pam_unix(sudo:session): session closed for user root
Dec 15 10:28:59 compute-0 sudo[38002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgzxfwabszsashezpknwmwhcvalrwsyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794538.857953-975-93923288443914/AnsiballZ_dnf.py'
Dec 15 10:28:59 compute-0 sudo[38002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:28:59 compute-0 python3.9[38004]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 15 10:29:01 compute-0 sudo[38002]: pam_unix(sudo:session): session closed for user root
Dec 15 10:29:02 compute-0 sudo[38155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmdclynbobbcxlllfczzshezleyqfntv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794542.0708463-999-263649640570805/AnsiballZ_file.py'
Dec 15 10:29:02 compute-0 sudo[38155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:29:02 compute-0 python3.9[38157]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 15 10:29:02 compute-0 sudo[38155]: pam_unix(sudo:session): session closed for user root
Dec 15 10:29:03 compute-0 sudo[38307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nusggzwxaefrqlhdkrynckjdqphnkrqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794543.0576618-1023-191169305348742/AnsiballZ_stat.py'
Dec 15 10:29:03 compute-0 sudo[38307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:29:03 compute-0 python3.9[38309]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:29:03 compute-0 sudo[38307]: pam_unix(sudo:session): session closed for user root
Dec 15 10:29:03 compute-0 sudo[38430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfaydysdklqwpvdkfaqctjkxitwfzesr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794543.0576618-1023-191169305348742/AnsiballZ_copy.py'
Dec 15 10:29:03 compute-0 sudo[38430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:29:04 compute-0 python3.9[38432]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765794543.0576618-1023-191169305348742/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 15 10:29:04 compute-0 sudo[38430]: pam_unix(sudo:session): session closed for user root
Dec 15 10:29:05 compute-0 sudo[38582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rszmjddedrkxeznlvibauvuulzrkuwro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794544.7090695-1068-233241415815172/AnsiballZ_systemd.py'
Dec 15 10:29:05 compute-0 sudo[38582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:29:05 compute-0 python3.9[38584]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 15 10:29:05 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 15 10:29:05 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 15 10:29:05 compute-0 kernel: Bridge firewalling registered
Dec 15 10:29:05 compute-0 systemd-modules-load[38588]: Inserted module 'br_netfilter'
Dec 15 10:29:05 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 15 10:29:05 compute-0 sudo[38582]: pam_unix(sudo:session): session closed for user root
Dec 15 10:29:06 compute-0 sudo[38742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wojiqxocmcndomzrkssiocwwwovquetj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794545.9179142-1092-246739810191909/AnsiballZ_stat.py'
Dec 15 10:29:06 compute-0 sudo[38742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:29:06 compute-0 python3.9[38744]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:29:06 compute-0 sudo[38742]: pam_unix(sudo:session): session closed for user root
Dec 15 10:29:06 compute-0 sudo[38865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psftbfqaivbcarehifekyhsordxfsqgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794545.9179142-1092-246739810191909/AnsiballZ_copy.py'
Dec 15 10:29:06 compute-0 sudo[38865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:29:06 compute-0 python3.9[38867]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765794545.9179142-1092-246739810191909/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 15 10:29:06 compute-0 sudo[38865]: pam_unix(sudo:session): session closed for user root
Dec 15 10:29:07 compute-0 sudo[39017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzgrksctdbipqybhfzuzunoyrdyitwkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794547.5449758-1146-141389861733176/AnsiballZ_dnf.py'
Dec 15 10:29:07 compute-0 sudo[39017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:29:08 compute-0 python3.9[39019]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 15 10:29:11 compute-0 dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Dec 15 10:29:11 compute-0 dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Dec 15 10:29:12 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 15 10:29:12 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 15 10:29:12 compute-0 systemd[1]: Reloading.
Dec 15 10:29:12 compute-0 systemd-rc-local-generator[39083]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:29:12 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 15 10:29:12 compute-0 sudo[39017]: pam_unix(sudo:session): session closed for user root
Dec 15 10:29:13 compute-0 python3.9[40472]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 15 10:29:14 compute-0 python3.9[41345]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 15 10:29:15 compute-0 python3.9[42098]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 15 10:29:16 compute-0 sudo[42916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtsypweahrvsobaloyxhxhnxctzykcgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794555.764076-1263-70980644142720/AnsiballZ_command.py'
Dec 15 10:29:16 compute-0 sudo[42916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:29:16 compute-0 python3.9[42946]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:29:16 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 15 10:29:16 compute-0 systemd[1]: Starting Authorization Manager...
Dec 15 10:29:16 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 15 10:29:16 compute-0 polkitd[43395]: Started polkitd version 0.117
Dec 15 10:29:16 compute-0 polkitd[43395]: Loading rules from directory /etc/polkit-1/rules.d
Dec 15 10:29:16 compute-0 polkitd[43395]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 15 10:29:16 compute-0 polkitd[43395]: Finished loading, compiling and executing 2 rules
Dec 15 10:29:16 compute-0 systemd[1]: Started Authorization Manager.
Dec 15 10:29:16 compute-0 polkitd[43395]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 15 10:29:17 compute-0 sudo[42916]: pam_unix(sudo:session): session closed for user root
Dec 15 10:29:17 compute-0 sudo[43563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ricqfhoyfqocrokcaxakrxbahirtmees ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794557.2680724-1290-32358206380181/AnsiballZ_systemd.py'
Dec 15 10:29:17 compute-0 sudo[43563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:29:17 compute-0 python3.9[43565]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 15 10:29:17 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 15 10:29:18 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 15 10:29:18 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 15 10:29:18 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 15 10:29:18 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 15 10:29:18 compute-0 sudo[43563]: pam_unix(sudo:session): session closed for user root
Dec 15 10:29:18 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 15 10:29:18 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 15 10:29:18 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.347s CPU time.
Dec 15 10:29:18 compute-0 systemd[1]: run-r9abc19e407fc4cdebc6719ddc2df42fb.service: Deactivated successfully.
Dec 15 10:29:18 compute-0 python3.9[43727]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 15 10:29:22 compute-0 sudo[43877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvekacvqbiqvnioefyfpsrciqtrcdtpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794561.9531643-1461-183982622712247/AnsiballZ_systemd.py'
Dec 15 10:29:22 compute-0 sudo[43877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:29:22 compute-0 python3.9[43879]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 15 10:29:22 compute-0 systemd[1]: Reloading.
Dec 15 10:29:22 compute-0 systemd-rc-local-generator[43907]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:29:23 compute-0 sudo[43877]: pam_unix(sudo:session): session closed for user root
Dec 15 10:29:23 compute-0 sudo[44066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tekyofuznztzbadlqkyatdahrgekvemn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794563.4113638-1461-129217411258016/AnsiballZ_systemd.py'
Dec 15 10:29:23 compute-0 sudo[44066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:29:24 compute-0 python3.9[44068]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 15 10:29:24 compute-0 systemd[1]: Reloading.
Dec 15 10:29:24 compute-0 systemd-rc-local-generator[44097]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:29:24 compute-0 sudo[44066]: pam_unix(sudo:session): session closed for user root
Dec 15 10:29:24 compute-0 sudo[44254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvmquxcsxesrvgmqybyoxirobrnunbrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794564.5814304-1509-119108768441988/AnsiballZ_command.py'
Dec 15 10:29:24 compute-0 sudo[44254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:29:25 compute-0 python3.9[44256]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:29:25 compute-0 sudo[44254]: pam_unix(sudo:session): session closed for user root
Dec 15 10:29:25 compute-0 sudo[44407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhzrkxqmywozfpcalgxxnfwwylihdrzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794565.2649953-1533-65725942962258/AnsiballZ_command.py'
Dec 15 10:29:25 compute-0 sudo[44407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:29:25 compute-0 python3.9[44409]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:29:25 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec 15 10:29:25 compute-0 sudo[44407]: pam_unix(sudo:session): session closed for user root
Dec 15 10:29:26 compute-0 sudo[44560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxcldhueqvcukirkcwjzgoqentbimejz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794565.9391506-1557-265363569844323/AnsiballZ_command.py'
Dec 15 10:29:26 compute-0 sudo[44560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:29:26 compute-0 python3.9[44562]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:29:27 compute-0 sudo[44560]: pam_unix(sudo:session): session closed for user root
Dec 15 10:29:28 compute-0 sudo[44722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqhmikusdoksshjdklpkdhciaymsbrey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794568.0496602-1581-66694519461417/AnsiballZ_command.py'
Dec 15 10:29:28 compute-0 sudo[44722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:29:28 compute-0 python3.9[44724]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:29:28 compute-0 sudo[44722]: pam_unix(sudo:session): session closed for user root
Dec 15 10:29:29 compute-0 sudo[44875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etkqyaudpulukplbpcfozdmxtzvthwgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794568.7356746-1605-249516683977753/AnsiballZ_systemd.py'
Dec 15 10:29:29 compute-0 sudo[44875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:29:29 compute-0 python3.9[44877]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 15 10:29:29 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 15 10:29:29 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Dec 15 10:29:29 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Dec 15 10:29:29 compute-0 systemd[1]: Starting Apply Kernel Variables...
Dec 15 10:29:29 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 15 10:29:29 compute-0 systemd[1]: Finished Apply Kernel Variables.
Dec 15 10:29:29 compute-0 sudo[44875]: pam_unix(sudo:session): session closed for user root
Dec 15 10:29:29 compute-0 sshd-session[31259]: Connection closed by 192.168.122.30 port 46084
Dec 15 10:29:29 compute-0 sshd-session[31256]: pam_unix(sshd:session): session closed for user zuul
Dec 15 10:29:29 compute-0 systemd-logind[797]: Session 9 logged out. Waiting for processes to exit.
Dec 15 10:29:29 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Dec 15 10:29:29 compute-0 systemd[1]: session-9.scope: Consumed 2min 28.592s CPU time.
Dec 15 10:29:29 compute-0 systemd-logind[797]: Removed session 9.
Dec 15 10:29:39 compute-0 sshd-session[44908]: Accepted publickey for zuul from 192.168.122.30 port 52062 ssh2: ECDSA SHA256:RI7NGykFU6DY8VmZwamQwcvn1msDfj+uaMVMpAHHUfo
Dec 15 10:29:40 compute-0 systemd-logind[797]: New session 10 of user zuul.
Dec 15 10:29:40 compute-0 systemd[1]: Started Session 10 of User zuul.
Dec 15 10:29:40 compute-0 sshd-session[44908]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 15 10:29:41 compute-0 python3.9[45061]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 15 10:29:42 compute-0 sudo[45215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rygxctxtgnadsfgwecynwtjmdritdzdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794581.686877-68-130827055632145/AnsiballZ_getent.py'
Dec 15 10:29:42 compute-0 sudo[45215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:29:42 compute-0 python3.9[45217]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 15 10:29:42 compute-0 sudo[45215]: pam_unix(sudo:session): session closed for user root
Dec 15 10:29:42 compute-0 sudo[45368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amigqigcfzttbfsvazdrfgfqguutkwgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794582.4920633-92-150077279600094/AnsiballZ_group.py'
Dec 15 10:29:42 compute-0 sudo[45368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:29:43 compute-0 python3.9[45370]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 15 10:29:43 compute-0 groupadd[45371]: group added to /etc/group: name=openvswitch, GID=42476
Dec 15 10:29:43 compute-0 groupadd[45371]: group added to /etc/gshadow: name=openvswitch
Dec 15 10:29:43 compute-0 groupadd[45371]: new group: name=openvswitch, GID=42476
Dec 15 10:29:43 compute-0 sudo[45368]: pam_unix(sudo:session): session closed for user root
Dec 15 10:29:43 compute-0 sudo[45526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxutiugwnftsjckkbhkynmfdxctasslj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794583.3928716-116-244585213369066/AnsiballZ_user.py'
Dec 15 10:29:43 compute-0 sudo[45526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:29:44 compute-0 python3.9[45528]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 15 10:29:44 compute-0 useradd[45530]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Dec 15 10:29:44 compute-0 useradd[45530]: add 'openvswitch' to group 'hugetlbfs'
Dec 15 10:29:44 compute-0 useradd[45530]: add 'openvswitch' to shadow group 'hugetlbfs'
Dec 15 10:29:44 compute-0 sudo[45526]: pam_unix(sudo:session): session closed for user root
Dec 15 10:29:45 compute-0 sudo[45686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyefjdhydevdvcwkrgmryqbpfbmwdcal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794584.5622525-146-165201231906612/AnsiballZ_setup.py'
Dec 15 10:29:45 compute-0 sudo[45686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:29:45 compute-0 python3.9[45688]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 15 10:29:45 compute-0 sudo[45686]: pam_unix(sudo:session): session closed for user root
Dec 15 10:29:45 compute-0 sudo[45770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvsqaoqynxcltdknrmyukqffuxzlmwmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794584.5622525-146-165201231906612/AnsiballZ_dnf.py'
Dec 15 10:29:45 compute-0 sudo[45770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:29:46 compute-0 python3.9[45772]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 15 10:29:48 compute-0 sudo[45770]: pam_unix(sudo:session): session closed for user root
Dec 15 10:29:50 compute-0 sudo[45936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkafhgxoljbxxdpdqrxhuhwesubqqqqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794589.980112-188-52113037926921/AnsiballZ_dnf.py'
Dec 15 10:29:50 compute-0 sudo[45936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:29:50 compute-0 python3.9[45938]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 15 10:30:02 compute-0 kernel: SELinux:  Converting 2731 SID table entries...
Dec 15 10:30:02 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 15 10:30:02 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 15 10:30:02 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 15 10:30:02 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 15 10:30:02 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 15 10:30:02 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 15 10:30:02 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 15 10:30:02 compute-0 groupadd[45961]: group added to /etc/group: name=unbound, GID=993
Dec 15 10:30:02 compute-0 groupadd[45961]: group added to /etc/gshadow: name=unbound
Dec 15 10:30:02 compute-0 groupadd[45961]: new group: name=unbound, GID=993
Dec 15 10:30:02 compute-0 useradd[45968]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Dec 15 10:30:02 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 15 10:30:02 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 15 10:30:04 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 15 10:30:04 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 15 10:30:04 compute-0 systemd[1]: Reloading.
Dec 15 10:30:04 compute-0 systemd-rc-local-generator[46465]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:30:04 compute-0 systemd-sysv-generator[46468]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:30:04 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 15 10:30:05 compute-0 sudo[45936]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:05 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 15 10:30:05 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 15 10:30:05 compute-0 systemd[1]: run-r462b9dec4ace4d79a42f0f44859042e3.service: Deactivated successfully.
Dec 15 10:30:06 compute-0 sudo[47034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qflqhyoolwmgqflpawpdznhssnhfwzbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794606.1822033-212-163853747774252/AnsiballZ_systemd.py'
Dec 15 10:30:06 compute-0 sudo[47034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:07 compute-0 python3.9[47036]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 15 10:30:07 compute-0 systemd[1]: Reloading.
Dec 15 10:30:07 compute-0 systemd-sysv-generator[47071]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:30:07 compute-0 systemd-rc-local-generator[47067]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:30:07 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Dec 15 10:30:07 compute-0 chown[47078]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 15 10:30:07 compute-0 ovs-ctl[47083]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 15 10:30:07 compute-0 ovs-ctl[47083]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 15 10:30:07 compute-0 ovs-ctl[47083]: Starting ovsdb-server [  OK  ]
Dec 15 10:30:07 compute-0 ovs-vsctl[47132]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 15 10:30:07 compute-0 ovs-vsctl[47152]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"ef4e8cd2-4818-4670-b0b5-31dc6d559800\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec 15 10:30:07 compute-0 ovs-ctl[47083]: Configuring Open vSwitch system IDs [  OK  ]
Dec 15 10:30:07 compute-0 ovs-ctl[47083]: Enabling remote OVSDB managers [  OK  ]
Dec 15 10:30:07 compute-0 ovs-vsctl[47158]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 15 10:30:07 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Dec 15 10:30:07 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 15 10:30:07 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 15 10:30:07 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 15 10:30:07 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Dec 15 10:30:07 compute-0 ovs-ctl[47203]: Inserting openvswitch module [  OK  ]
Dec 15 10:30:07 compute-0 ovs-ctl[47172]: Starting ovs-vswitchd [  OK  ]
Dec 15 10:30:07 compute-0 ovs-vsctl[47224]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 15 10:30:07 compute-0 ovs-ctl[47172]: Enabling remote OVSDB managers [  OK  ]
Dec 15 10:30:07 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 15 10:30:07 compute-0 systemd[1]: Starting Open vSwitch...
Dec 15 10:30:07 compute-0 systemd[1]: Finished Open vSwitch.
Dec 15 10:30:07 compute-0 sudo[47034]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:08 compute-0 python3.9[47375]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 15 10:30:09 compute-0 sudo[47525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skrmjyzsbspzfyhfzcjcfhslrfwscgaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794609.1465495-266-148223678429396/AnsiballZ_sefcontext.py'
Dec 15 10:30:09 compute-0 sudo[47525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:09 compute-0 python3.9[47527]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 15 10:30:10 compute-0 kernel: SELinux:  Converting 2745 SID table entries...
Dec 15 10:30:10 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 15 10:30:10 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 15 10:30:10 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 15 10:30:10 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 15 10:30:10 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 15 10:30:10 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 15 10:30:10 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 15 10:30:11 compute-0 sudo[47525]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:13 compute-0 python3.9[47682]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 15 10:30:14 compute-0 sudo[47838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjcjfpilteanenbuhohdjpefxsjsvzjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794614.172317-320-11833424002515/AnsiballZ_dnf.py'
Dec 15 10:30:14 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 15 10:30:14 compute-0 sudo[47838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:14 compute-0 python3.9[47840]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 15 10:30:16 compute-0 sudo[47838]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:16 compute-0 sudo[47991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouknxqqtqtgvgffzqvqbwsjqevnilfeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794616.30408-344-122326997495795/AnsiballZ_command.py'
Dec 15 10:30:16 compute-0 sudo[47991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:16 compute-0 python3.9[47993]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:30:17 compute-0 sudo[47991]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:18 compute-0 sudo[48278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmeyygkmeqmvxsvkxjhcgcwvndutbuqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794617.835252-368-133361842757338/AnsiballZ_file.py'
Dec 15 10:30:18 compute-0 sudo[48278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:18 compute-0 python3.9[48280]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Dec 15 10:30:18 compute-0 sudo[48278]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:19 compute-0 python3.9[48430]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 15 10:30:19 compute-0 sudo[48582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjzxmqkdnijaunimfxhzztorcnjpmnnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794619.4969337-416-63618012109712/AnsiballZ_dnf.py'
Dec 15 10:30:19 compute-0 sudo[48582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:19 compute-0 python3.9[48584]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 15 10:30:21 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 15 10:30:21 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 15 10:30:21 compute-0 systemd[1]: Reloading.
Dec 15 10:30:22 compute-0 systemd-sysv-generator[48628]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:30:22 compute-0 systemd-rc-local-generator[48623]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:30:22 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 15 10:30:22 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 15 10:30:22 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 15 10:30:22 compute-0 systemd[1]: run-r33bb89a3f65047968bd2554b69eab9da.service: Deactivated successfully.
Dec 15 10:30:22 compute-0 sudo[48582]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:23 compute-0 sudo[48901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctvjckiunknzvavaqxarnmhtjltgjskb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794622.9497895-440-123337971640670/AnsiballZ_systemd.py'
Dec 15 10:30:23 compute-0 sudo[48901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:23 compute-0 python3.9[48903]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 15 10:30:23 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 15 10:30:23 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Dec 15 10:30:23 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Dec 15 10:30:23 compute-0 systemd[1]: Stopping Network Manager...
Dec 15 10:30:23 compute-0 NetworkManager[7187]: <info>  [1765794623.5408] caught SIGTERM, shutting down normally.
Dec 15 10:30:23 compute-0 NetworkManager[7187]: <info>  [1765794623.5434] dhcp4 (eth0): canceled DHCP transaction
Dec 15 10:30:23 compute-0 NetworkManager[7187]: <info>  [1765794623.5434] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 15 10:30:23 compute-0 NetworkManager[7187]: <info>  [1765794623.5434] dhcp4 (eth0): state changed no lease
Dec 15 10:30:23 compute-0 NetworkManager[7187]: <info>  [1765794623.5438] manager: NetworkManager state is now CONNECTED_SITE
Dec 15 10:30:23 compute-0 NetworkManager[7187]: <info>  [1765794623.5517] exiting (success)
Dec 15 10:30:23 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 15 10:30:23 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 15 10:30:23 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 15 10:30:23 compute-0 systemd[1]: Stopped Network Manager.
Dec 15 10:30:23 compute-0 systemd[1]: NetworkManager.service: Consumed 13.705s CPU time, 4.2M memory peak, read 0B from disk, written 30.0K to disk.
Dec 15 10:30:23 compute-0 systemd[1]: Starting Network Manager...
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.6183] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:f0a48d23-f548-4261-85f3-3468dc8c15f7)
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.6186] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.6238] manager[0x55ad9f84b000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 15 10:30:23 compute-0 systemd[1]: Starting Hostname Service...
Dec 15 10:30:23 compute-0 systemd[1]: Started Hostname Service.
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7064] hostname: hostname: using hostnamed
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7065] hostname: static hostname changed from (none) to "compute-0"
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7076] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7082] manager[0x55ad9f84b000]: rfkill: Wi-Fi hardware radio set enabled
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7083] manager[0x55ad9f84b000]: rfkill: WWAN hardware radio set enabled
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7104] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-ovs.so)
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7115] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7116] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7116] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7117] manager: Networking is enabled by state file
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7119] settings: Loaded settings plugin: keyfile (internal)
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7123] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7147] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7157] dhcp: init: Using DHCP client 'internal'
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7160] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7166] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7171] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7178] device (lo): Activation: starting connection 'lo' (e64a39bd-9875-4e86-a1ed-975879eaa15a)
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7185] device (eth0): carrier: link connected
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7189] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7194] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7194] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7199] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7204] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7209] device (eth1): carrier: link connected
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7212] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7216] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (b505dc75-3963-5da7-bfe2-a0606373c56e) (indicated)
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7216] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7220] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7227] device (eth1): Activation: starting connection 'ci-private-network' (b505dc75-3963-5da7-bfe2-a0606373c56e)
Dec 15 10:30:23 compute-0 systemd[1]: Started Network Manager.
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7234] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7254] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7257] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7258] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7260] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7264] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7267] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7269] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7275] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7281] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7284] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7294] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7306] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7313] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7315] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7323] device (lo): Activation: successful, device activated.
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7329] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7331] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7334] manager: NetworkManager state is now CONNECTED_LOCAL
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7336] device (eth1): Activation: successful, device activated.
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7344] dhcp4 (eth0): state changed new lease, address=38.102.83.5
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7349] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 15 10:30:23 compute-0 systemd[1]: Starting Network Manager Wait Online...
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7412] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7501] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7503] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7508] manager: NetworkManager state is now CONNECTED_SITE
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7510] device (eth0): Activation: successful, device activated.
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7514] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 15 10:30:23 compute-0 NetworkManager[48915]: <info>  [1765794623.7517] manager: startup complete
Dec 15 10:30:23 compute-0 sudo[48901]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:23 compute-0 systemd[1]: Finished Network Manager Wait Online.
Dec 15 10:30:24 compute-0 sudo[49127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kouzwrsbmvfzaqrjjugnsfqguxtaphln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794623.9676661-464-252640962314703/AnsiballZ_dnf.py'
Dec 15 10:30:24 compute-0 sudo[49127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:24 compute-0 python3.9[49129]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 15 10:30:30 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 15 10:30:30 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 15 10:30:30 compute-0 systemd[1]: Reloading.
Dec 15 10:30:30 compute-0 systemd-rc-local-generator[49181]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:30:30 compute-0 systemd-sysv-generator[49185]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:30:31 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 15 10:30:31 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 15 10:30:31 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 15 10:30:31 compute-0 systemd[1]: run-r1d1cac458a3b4e838ab2255b8f52d839.service: Deactivated successfully.
Dec 15 10:30:32 compute-0 sudo[49127]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:33 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 15 10:30:35 compute-0 sudo[49587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqjzbajprtylxuldbkskuzcsvqtlsmjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794635.309983-500-1729497325089/AnsiballZ_stat.py'
Dec 15 10:30:35 compute-0 sudo[49587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:36 compute-0 python3.9[49589]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 15 10:30:36 compute-0 sudo[49587]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:36 compute-0 sudo[49739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyskqfqxtfstcyxcfdueygootbekwgot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794636.232679-527-192248441868487/AnsiballZ_ini_file.py'
Dec 15 10:30:36 compute-0 sudo[49739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:36 compute-0 python3.9[49741]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:30:36 compute-0 sudo[49739]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:37 compute-0 sudo[49893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znjkwnibxlkiakatnbxsiablkvxojdou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794637.3782065-557-150650107787782/AnsiballZ_ini_file.py'
Dec 15 10:30:37 compute-0 sudo[49893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:37 compute-0 python3.9[49895]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:30:37 compute-0 sudo[49893]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:38 compute-0 sudo[50045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyfvzvhwywqgkntpckmsyykghefwautd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794638.1151726-557-16486516693554/AnsiballZ_ini_file.py'
Dec 15 10:30:38 compute-0 sudo[50045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:38 compute-0 python3.9[50047]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:30:38 compute-0 sudo[50045]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:39 compute-0 sudo[50197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfhincegbkxfffsyonpjxzkbeezbryxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794638.8220031-602-99855763625780/AnsiballZ_ini_file.py'
Dec 15 10:30:39 compute-0 sudo[50197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:39 compute-0 python3.9[50199]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:30:39 compute-0 sudo[50197]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:39 compute-0 sudo[50349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spfqmalqoszdjkaxjtxbwluafhqqsosa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794639.5445862-602-176212468452309/AnsiballZ_ini_file.py'
Dec 15 10:30:39 compute-0 sudo[50349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:39 compute-0 python3.9[50351]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:30:39 compute-0 sudo[50349]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:40 compute-0 sudo[50501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blaijaqangwcsdkolgstcglawvhnzsjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794640.2051744-647-168971045783698/AnsiballZ_stat.py'
Dec 15 10:30:40 compute-0 sudo[50501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:40 compute-0 python3.9[50503]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:30:40 compute-0 sudo[50501]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:41 compute-0 sudo[50624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmrnegvlkunmbdtrvfnshegsxnndiqci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794640.2051744-647-168971045783698/AnsiballZ_copy.py'
Dec 15 10:30:41 compute-0 sudo[50624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:41 compute-0 python3.9[50626]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765794640.2051744-647-168971045783698/.source _original_basename=.ernfe0jl follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:30:41 compute-0 sudo[50624]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:41 compute-0 sudo[50776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkwbtcuwfxltszwbearnggwnhxsqfoel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794641.5266535-692-89279542402705/AnsiballZ_file.py'
Dec 15 10:30:41 compute-0 sudo[50776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:41 compute-0 python3.9[50778]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:30:42 compute-0 sudo[50776]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:42 compute-0 sudo[50928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-folhlgplziphxavzplrhxahvnpzxffvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794642.1944985-716-173738571160906/AnsiballZ_edpm_os_net_config_mappings.py'
Dec 15 10:30:42 compute-0 sudo[50928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:42 compute-0 python3.9[50930]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 15 10:30:42 compute-0 sudo[50928]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:43 compute-0 sudo[51080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svzwqzxvtptvvzdnqxwrolwyzdsbycbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794643.066356-743-166863351941445/AnsiballZ_file.py'
Dec 15 10:30:43 compute-0 sudo[51080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:43 compute-0 python3.9[51082]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:30:43 compute-0 sudo[51080]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:44 compute-0 sudo[51232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okzihvmljohazybydgvgqvkdybyzhppz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794643.9000325-773-8686047526987/AnsiballZ_stat.py'
Dec 15 10:30:44 compute-0 sudo[51232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:44 compute-0 sudo[51232]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:44 compute-0 sudo[51355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkutkjlmgbxzlrfumkarpxogmvnhjelu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794643.9000325-773-8686047526987/AnsiballZ_copy.py'
Dec 15 10:30:44 compute-0 sudo[51355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:44 compute-0 sudo[51355]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:45 compute-0 sudo[51507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojooknqzgsxqivclztlqemsidsmrlgic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794645.1481023-818-27279097484115/AnsiballZ_slurp.py'
Dec 15 10:30:45 compute-0 sudo[51507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:45 compute-0 python3.9[51509]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 15 10:30:45 compute-0 sudo[51507]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:46 compute-0 sudo[51682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvaajraiqydhrjhpjjfzocozpzznbeqm ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794645.9867394-845-182102361231539/async_wrapper.py j686071933130 300 /home/zuul/.ansible/tmp/ansible-tmp-1765794645.9867394-845-182102361231539/AnsiballZ_edpm_os_net_config.py _'
Dec 15 10:30:46 compute-0 sudo[51682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:46 compute-0 ansible-async_wrapper.py[51684]: Invoked with j686071933130 300 /home/zuul/.ansible/tmp/ansible-tmp-1765794645.9867394-845-182102361231539/AnsiballZ_edpm_os_net_config.py _
Dec 15 10:30:46 compute-0 ansible-async_wrapper.py[51687]: Starting module and watcher
Dec 15 10:30:46 compute-0 ansible-async_wrapper.py[51687]: Start watching 51688 (300)
Dec 15 10:30:46 compute-0 ansible-async_wrapper.py[51688]: Start module (51688)
Dec 15 10:30:46 compute-0 ansible-async_wrapper.py[51684]: Return async_wrapper task started.
Dec 15 10:30:46 compute-0 sudo[51682]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:47 compute-0 python3.9[51689]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec 15 10:30:47 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 15 10:30:47 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 15 10:30:47 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec 15 10:30:47 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 15 10:30:47 compute-0 kernel: cfg80211: failed to load regulatory.db
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.7866] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51690 uid=0 result="success"
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.7888] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51690 uid=0 result="success"
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8484] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8486] audit: op="connection-add" uuid="a32711d4-1407-4709-a07a-8719ccee49ab" name="br-ex-br" pid=51690 uid=0 result="success"
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8503] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8504] audit: op="connection-add" uuid="4d9c29b2-1781-4dea-b147-f979ad1ee96e" name="br-ex-port" pid=51690 uid=0 result="success"
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8517] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8518] audit: op="connection-add" uuid="d5aa53d3-e27f-42dd-9be7-ab1a853121dc" name="eth1-port" pid=51690 uid=0 result="success"
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8530] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8531] audit: op="connection-add" uuid="04e0b7e4-66c1-4eb7-99ef-f135f14bd9be" name="vlan20-port" pid=51690 uid=0 result="success"
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8542] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8544] audit: op="connection-add" uuid="1008e1fc-a0e8-46ad-b279-069d560b9f7a" name="vlan21-port" pid=51690 uid=0 result="success"
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8554] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8555] audit: op="connection-add" uuid="93825051-ec83-4543-b7fd-258c9b8f5fc1" name="vlan22-port" pid=51690 uid=0 result="success"
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8566] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8567] audit: op="connection-add" uuid="d418d38b-d712-4f7f-ac71-d61278d49cf7" name="vlan23-port" pid=51690 uid=0 result="success"
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8585] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=51690 uid=0 result="success"
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8602] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8603] audit: op="connection-add" uuid="a206b2fe-6e41-4c85-b675-c64423eac9fc" name="br-ex-if" pid=51690 uid=0 result="success"
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8651] audit: op="connection-update" uuid="b505dc75-3963-5da7-bfe2-a0606373c56e" name="ci-private-network" args="connection.master,connection.port-type,connection.timestamp,connection.controller,connection.slave-type,ipv4.addresses,ipv4.routes,ipv4.dns,ipv4.method,ipv4.never-default,ipv4.routing-rules,ipv6.addresses,ipv6.routes,ipv6.dns,ipv6.addr-gen-mode,ipv6.method,ipv6.routing-rules,ovs-external-ids.data,ovs-interface.type" pid=51690 uid=0 result="success"
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8666] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8667] audit: op="connection-add" uuid="39432bce-5c32-4fb6-a471-5c9cc7eda346" name="vlan20-if" pid=51690 uid=0 result="success"
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8683] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8684] audit: op="connection-add" uuid="19e95323-ff36-4d9b-8801-800a8f0611d8" name="vlan21-if" pid=51690 uid=0 result="success"
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8698] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8700] audit: op="connection-add" uuid="9fe09d36-31f4-48fc-b17d-df32a083fa7f" name="vlan22-if" pid=51690 uid=0 result="success"
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8715] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8716] audit: op="connection-add" uuid="84c6945e-be1c-488c-91fc-073033c21d2e" name="vlan23-if" pid=51690 uid=0 result="success"
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8729] audit: op="connection-delete" uuid="2e657e63-4775-3f98-95ab-5b1da731b772" name="Wired connection 1" pid=51690 uid=0 result="success"
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8740] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <warn>  [1765794648.8742] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8748] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8751] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (a32711d4-1407-4709-a07a-8719ccee49ab)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8751] audit: op="connection-activate" uuid="a32711d4-1407-4709-a07a-8719ccee49ab" name="br-ex-br" pid=51690 uid=0 result="success"
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8753] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <warn>  [1765794648.8754] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8759] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8763] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (4d9c29b2-1781-4dea-b147-f979ad1ee96e)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8765] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <warn>  [1765794648.8765] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8769] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8773] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (d5aa53d3-e27f-42dd-9be7-ab1a853121dc)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8775] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <warn>  [1765794648.8776] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8780] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8784] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (04e0b7e4-66c1-4eb7-99ef-f135f14bd9be)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8785] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <warn>  [1765794648.8786] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8791] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8794] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (1008e1fc-a0e8-46ad-b279-069d560b9f7a)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8796] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <warn>  [1765794648.8797] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8802] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8806] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (93825051-ec83-4543-b7fd-258c9b8f5fc1)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8807] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <warn>  [1765794648.8808] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8813] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8817] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (d418d38b-d712-4f7f-ac71-d61278d49cf7)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8817] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8820] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8822] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8827] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <warn>  [1765794648.8827] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8830] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8834] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (a206b2fe-6e41-4c85-b675-c64423eac9fc)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8835] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8837] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8839] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8840] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8841] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8851] device (eth1): disconnecting for new activation request.
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8852] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8855] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8856] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8857] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8859] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <warn>  [1765794648.8860] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8863] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8868] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (39432bce-5c32-4fb6-a471-5c9cc7eda346)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8868] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8872] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8874] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8876] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8878] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <warn>  [1765794648.8880] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8884] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8889] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (19e95323-ff36-4d9b-8801-800a8f0611d8)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8890] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8893] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8895] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8896] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8899] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <warn>  [1765794648.8900] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8902] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8907] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (9fe09d36-31f4-48fc-b17d-df32a083fa7f)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8908] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8911] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8913] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8914] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8917] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <warn>  [1765794648.8918] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8922] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8925] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (84c6945e-be1c-488c-91fc-073033c21d2e)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8926] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8929] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8931] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8932] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8933] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8943] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=51690 uid=0 result="success"
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8945] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8948] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8950] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8964] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8968] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8971] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8974] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8976] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8980] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 kernel: ovs-system: entered promiscuous mode
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8984] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8986] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8988] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8992] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.8995] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 kernel: Timeout policy base is empty
Dec 15 10:30:48 compute-0 systemd-udevd[51695]: Network interface NamePolicy= disabled on kernel command line.
Dec 15 10:30:48 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9002] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9004] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9008] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9013] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9016] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9018] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9022] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9027] dhcp4 (eth0): canceled DHCP transaction
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9027] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9027] dhcp4 (eth0): state changed no lease
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9028] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9037] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9041] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51690 uid=0 result="fail" reason="Device is not activated"
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9081] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9088] dhcp4 (eth0): state changed new lease, address=38.102.83.5
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9091] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec 15 10:30:48 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9129] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9137] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9144] device (eth1): disconnecting for new activation request.
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9144] audit: op="connection-activate" uuid="b505dc75-3963-5da7-bfe2-a0606373c56e" name="ci-private-network" pid=51690 uid=0 result="success"
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9145] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9237] device (eth1): Activation: starting connection 'ci-private-network' (b505dc75-3963-5da7-bfe2-a0606373c56e)
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9241] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9260] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9265] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9271] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9275] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9281] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9282] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9284] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9285] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9287] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9288] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9290] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51690 uid=0 result="success"
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9292] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9298] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9303] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9306] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9309] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9314] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9318] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9322] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9325] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9329] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9333] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9336] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9339] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9346] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9349] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 kernel: br-ex: entered promiscuous mode
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9385] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9389] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9394] device (eth1): Activation: successful, device activated.
Dec 15 10:30:48 compute-0 kernel: vlan22: entered promiscuous mode
Dec 15 10:30:48 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9472] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9487] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 systemd-udevd[51694]: Network interface NamePolicy= disabled on kernel command line.
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9508] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9509] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9513] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 15 10:30:48 compute-0 kernel: vlan23: entered promiscuous mode
Dec 15 10:30:48 compute-0 systemd-udevd[51789]: Network interface NamePolicy= disabled on kernel command line.
Dec 15 10:30:48 compute-0 kernel: vlan20: entered promiscuous mode
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9609] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9623] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 kernel: vlan21: entered promiscuous mode
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9670] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9683] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9689] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9692] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9698] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9710] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9711] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9714] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9718] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9723] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9746] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9753] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9793] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9795] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9798] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9805] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9810] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 15 10:30:48 compute-0 NetworkManager[48915]: <info>  [1765794648.9814] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 15 10:30:50 compute-0 NetworkManager[48915]: <info>  [1765794650.1116] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51690 uid=0 result="success"
Dec 15 10:30:50 compute-0 NetworkManager[48915]: <info>  [1765794650.2763] checkpoint[0x55ad9f821950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec 15 10:30:50 compute-0 NetworkManager[48915]: <info>  [1765794650.2765] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51690 uid=0 result="success"
Dec 15 10:30:50 compute-0 sudo[52047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyfbuuqltwmbfzcmtwzfhqexqrgnmfll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794649.952975-845-30078438635615/AnsiballZ_async_status.py'
Dec 15 10:30:50 compute-0 sudo[52047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:50 compute-0 NetworkManager[48915]: <info>  [1765794650.5961] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51690 uid=0 result="success"
Dec 15 10:30:50 compute-0 NetworkManager[48915]: <info>  [1765794650.5976] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51690 uid=0 result="success"
Dec 15 10:30:50 compute-0 python3.9[52049]: ansible-ansible.legacy.async_status Invoked with jid=j686071933130.51684 mode=status _async_dir=/root/.ansible_async
Dec 15 10:30:50 compute-0 sudo[52047]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:50 compute-0 NetworkManager[48915]: <info>  [1765794650.8164] audit: op="networking-control" arg="global-dns-configuration" pid=51690 uid=0 result="success"
Dec 15 10:30:50 compute-0 NetworkManager[48915]: <info>  [1765794650.8210] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec 15 10:30:50 compute-0 NetworkManager[48915]: <info>  [1765794650.8240] audit: op="networking-control" arg="global-dns-configuration" pid=51690 uid=0 result="success"
Dec 15 10:30:50 compute-0 NetworkManager[48915]: <info>  [1765794650.8261] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51690 uid=0 result="success"
Dec 15 10:30:50 compute-0 NetworkManager[48915]: <info>  [1765794650.9692] checkpoint[0x55ad9f821a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec 15 10:30:50 compute-0 NetworkManager[48915]: <info>  [1765794650.9697] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51690 uid=0 result="success"
Dec 15 10:30:51 compute-0 ansible-async_wrapper.py[51688]: Module complete (51688)
Dec 15 10:30:51 compute-0 ansible-async_wrapper.py[51687]: Done in kid B.
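The NetworkManager activity above (the br-ex OVS bridge/port/interface profiles, the eth1 and vlan20-vlan23 port and interface profiles, and checkpoint 1) was driven by the async network-configuration task that reports completion here. As a purely illustrative follow-up that is not part of the recorded run, a minimal Python sketch that checks whether the interface profiles named in the log ended up active, assuming nmcli is installed on the node:

#!/usr/bin/env python3
# Illustrative check (not from the recorded run): confirm the OVS interface
# profiles named in the log above are active.
import subprocess

EXPECTED = {"br-ex-if", "vlan20-if", "vlan21-if", "vlan22-if", "vlan23-if"}

def active_connections():
    # Terse, field-limited output: one "NAME:TYPE:DEVICE" line per connection.
    out = subprocess.run(
        ["nmcli", "-t", "-f", "NAME,TYPE,DEVICE", "connection", "show", "--active"],
        check=True, capture_output=True, text=True,
    ).stdout
    return {line.split(":")[0] for line in out.splitlines() if line}

missing = EXPECTED - active_connections()
print("all expected OVS interface profiles active" if not missing
      else "missing: " + ", ".join(sorted(missing)))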
Dec 15 10:30:53 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 15 10:30:53 compute-0 sudo[52154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osvgsvuawuqmqnkdjusodatvuzhvtove ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794649.952975-845-30078438635615/AnsiballZ_async_status.py'
Dec 15 10:30:53 compute-0 sudo[52154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:54 compute-0 python3.9[52156]: ansible-ansible.legacy.async_status Invoked with jid=j686071933130.51684 mode=status _async_dir=/root/.ansible_async
Dec 15 10:30:54 compute-0 sudo[52154]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:54 compute-0 sudo[52254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmzhoqykcxymyiurojmjedhuwlhdkcot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794649.952975-845-30078438635615/AnsiballZ_async_status.py'
Dec 15 10:30:54 compute-0 sudo[52254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:54 compute-0 python3.9[52256]: ansible-ansible.legacy.async_status Invoked with jid=j686071933130.51684 mode=cleanup _async_dir=/root/.ansible_async
Dec 15 10:30:54 compute-0 sudo[52254]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:55 compute-0 sudo[52406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qacpevzwhjaznakndhtrppmtrmjyagty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794655.4231608-926-37458936247/AnsiballZ_stat.py'
Dec 15 10:30:55 compute-0 sudo[52406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:56 compute-0 python3.9[52408]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:30:56 compute-0 sudo[52406]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:56 compute-0 sudo[52529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkqmmsldilnvprtsoqmbjdbryothalng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794655.4231608-926-37458936247/AnsiballZ_copy.py'
Dec 15 10:30:56 compute-0 sudo[52529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:56 compute-0 python3.9[52531]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765794655.4231608-926-37458936247/.source.returncode _original_basename=.pr6n80ft follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:30:56 compute-0 sudo[52529]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:57 compute-0 sudo[52681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypfhvtalpzooxuarcbrzcoyzudvxiydc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794656.9057028-974-100266257317630/AnsiballZ_stat.py'
Dec 15 10:30:57 compute-0 sudo[52681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:57 compute-0 python3.9[52683]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:30:57 compute-0 sudo[52681]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:57 compute-0 sudo[52805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcgkgnuioldyyzfvodkajimkxatixugh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794656.9057028-974-100266257317630/AnsiballZ_copy.py'
Dec 15 10:30:57 compute-0 sudo[52805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:58 compute-0 python3.9[52807]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765794656.9057028-974-100266257317630/.source.cfg _original_basename=.w076m_6w follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:30:58 compute-0 sudo[52805]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:58 compute-0 sudo[52957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inwbgtdgqnrzlmlaikkhbagdagpomqdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794658.3094146-1019-183258292442949/AnsiballZ_systemd.py'
Dec 15 10:30:58 compute-0 sudo[52957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:30:58 compute-0 python3.9[52959]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 15 10:30:58 compute-0 systemd[1]: Reloading Network Manager...
Dec 15 10:30:58 compute-0 NetworkManager[48915]: <info>  [1765794658.9720] audit: op="reload" arg="0" pid=52963 uid=0 result="success"
Dec 15 10:30:58 compute-0 NetworkManager[48915]: <info>  [1765794658.9727] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec 15 10:30:58 compute-0 systemd[1]: Reloaded Network Manager.
Dec 15 10:30:59 compute-0 sudo[52957]: pam_unix(sudo:session): session closed for user root
Dec 15 10:30:59 compute-0 sshd-session[44911]: Connection closed by 192.168.122.30 port 52062
Dec 15 10:30:59 compute-0 sshd-session[44908]: pam_unix(sshd:session): session closed for user zuul
Dec 15 10:30:59 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Dec 15 10:30:59 compute-0 systemd[1]: session-10.scope: Consumed 49.941s CPU time.
Dec 15 10:30:59 compute-0 systemd-logind[797]: Session 10 logged out. Waiting for processes to exit.
Dec 15 10:30:59 compute-0 systemd-logind[797]: Removed session 10.
Dec 15 10:31:05 compute-0 sshd-session[52994]: Accepted publickey for zuul from 192.168.122.30 port 38026 ssh2: ECDSA SHA256:RI7NGykFU6DY8VmZwamQwcvn1msDfj+uaMVMpAHHUfo
Dec 15 10:31:05 compute-0 systemd-logind[797]: New session 11 of user zuul.
Dec 15 10:31:05 compute-0 systemd[1]: Started Session 11 of User zuul.
Dec 15 10:31:05 compute-0 sshd-session[52994]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 15 10:31:06 compute-0 python3.9[53147]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 15 10:31:07 compute-0 python3.9[53302]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 15 10:31:08 compute-0 python3.9[53495]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:31:09 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 15 10:31:09 compute-0 sshd-session[52997]: Connection closed by 192.168.122.30 port 38026
Dec 15 10:31:09 compute-0 sshd-session[52994]: pam_unix(sshd:session): session closed for user zuul
Dec 15 10:31:09 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Dec 15 10:31:09 compute-0 systemd[1]: session-11.scope: Consumed 2.195s CPU time.
Dec 15 10:31:09 compute-0 systemd-logind[797]: Session 11 logged out. Waiting for processes to exit.
Dec 15 10:31:09 compute-0 systemd-logind[797]: Removed session 11.
Dec 15 10:31:14 compute-0 sshd-session[53525]: Accepted publickey for zuul from 192.168.122.30 port 35974 ssh2: ECDSA SHA256:RI7NGykFU6DY8VmZwamQwcvn1msDfj+uaMVMpAHHUfo
Dec 15 10:31:14 compute-0 systemd-logind[797]: New session 12 of user zuul.
Dec 15 10:31:14 compute-0 systemd[1]: Started Session 12 of User zuul.
Dec 15 10:31:14 compute-0 sshd-session[53525]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 15 10:31:15 compute-0 python3.9[53678]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 15 10:31:16 compute-0 python3.9[53832]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 15 10:31:17 compute-0 sudo[53987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adeyzwjooebkhhimpwfojpcbudpttduu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794677.022446-80-93871867064388/AnsiballZ_setup.py'
Dec 15 10:31:17 compute-0 sudo[53987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:17 compute-0 python3.9[53989]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 15 10:31:18 compute-0 sudo[53987]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:18 compute-0 sudo[54071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqxpkwdblscxzqjztgowflwtjerogpoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794677.022446-80-93871867064388/AnsiballZ_dnf.py'
Dec 15 10:31:18 compute-0 sudo[54071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:18 compute-0 python3.9[54073]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 15 10:31:20 compute-0 sudo[54071]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:20 compute-0 sudo[54225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nioigctgyopjlqfcyubkebvoqtoldcyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794680.5022247-116-14493535857378/AnsiballZ_setup.py'
Dec 15 10:31:20 compute-0 sudo[54225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:21 compute-0 python3.9[54227]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 15 10:31:21 compute-0 sudo[54225]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:22 compute-0 sudo[54420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mflwifxpqemzsteyeijtidbqrxgukxja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794681.6628847-149-124132368905538/AnsiballZ_file.py'
Dec 15 10:31:22 compute-0 sudo[54420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:22 compute-0 python3.9[54422]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:31:22 compute-0 sudo[54420]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:23 compute-0 sudo[54572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qblchfouwdopqyftawzslxtphdoxlgwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794682.7117655-173-75445707048532/AnsiballZ_command.py'
Dec 15 10:31:23 compute-0 sudo[54572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:23 compute-0 python3.9[54574]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:31:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1555565994-merged.mount: Deactivated successfully.
Dec 15 10:31:23 compute-0 podman[54575]: 2025-12-15 10:31:23.488664486 +0000 UTC m=+0.077649711 system refresh
Dec 15 10:31:23 compute-0 sudo[54572]: pam_unix(sudo:session): session closed for user root
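The task above shells out to `podman network inspect podman`, which prints the default network definition as JSON. A small sketch of consuming that output programmatically, assuming podman is on PATH; the field names are podman's standard inspect output and the printed values are not taken from this log:

#!/usr/bin/env python3
# Illustrative only: run the same inspect command as the Ansible task above
# and read its JSON output.
import json
import subprocess

raw = subprocess.run(
    ["podman", "network", "inspect", "podman"],
    check=True, capture_output=True, text=True,
).stdout

networks = json.loads(raw)        # `network inspect` returns a JSON list
default = networks[0]
print(default.get("name"), default.get("driver"))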
Dec 15 10:31:24 compute-0 sudo[54737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqpkoodfguknqaruocnskgoleymwcmuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794683.666761-197-106280472310238/AnsiballZ_stat.py'
Dec 15 10:31:24 compute-0 sudo[54737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:24 compute-0 python3.9[54739]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:31:24 compute-0 sudo[54737]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:24 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 15 10:31:24 compute-0 sudo[54860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcubypgdmokqxxgmghyhltdvqgdrlicb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794683.666761-197-106280472310238/AnsiballZ_copy.py'
Dec 15 10:31:24 compute-0 sudo[54860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:25 compute-0 python3.9[54862]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765794683.666761-197-106280472310238/.source.json follow=False _original_basename=podman_network_config.j2 checksum=c63cd3e25aae018e8edbfcbb5f52b4f718db5738 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:31:25 compute-0 sudo[54860]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:25 compute-0 sudo[55012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dddrrhqgjxoawbwsdvbwovgegktdlzzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794685.2196963-242-109514941792933/AnsiballZ_stat.py'
Dec 15 10:31:25 compute-0 sudo[55012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:25 compute-0 python3.9[55014]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:31:25 compute-0 sudo[55012]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:26 compute-0 sudo[55135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lippirsjbhgwtpkkgyraavbisaiwdiar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794685.2196963-242-109514941792933/AnsiballZ_copy.py'
Dec 15 10:31:26 compute-0 sudo[55135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:26 compute-0 python3.9[55137]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765794685.2196963-242-109514941792933/.source.conf follow=False _original_basename=registries.conf.j2 checksum=ea7e71ddf075bf55e555c64399d15b2ffe005fe9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 15 10:31:26 compute-0 sudo[55135]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:27 compute-0 sudo[55287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hntvmqjlikuvqdaqhfbvqofojixmrsju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794686.53166-290-163287721211036/AnsiballZ_ini_file.py'
Dec 15 10:31:27 compute-0 sudo[55287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:27 compute-0 python3.9[55289]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 15 10:31:27 compute-0 sudo[55287]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:27 compute-0 sudo[55439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikgrutbbkxafvxeqsbnpwcepkdkcneql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794687.5252235-290-212215188496971/AnsiballZ_ini_file.py'
Dec 15 10:31:27 compute-0 sudo[55439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:27 compute-0 python3.9[55441]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 15 10:31:27 compute-0 sudo[55439]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:28 compute-0 sudo[55591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgkjtfdyhxcbcltwiouqqrdqrwcpwcgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794688.1270392-290-275701993134254/AnsiballZ_ini_file.py'
Dec 15 10:31:28 compute-0 sudo[55591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:28 compute-0 python3.9[55593]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 15 10:31:28 compute-0 sudo[55591]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:28 compute-0 sudo[55743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcaliiuxlkruwzpbfjgvjihfiyvulxjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794688.6990533-290-208014158913479/AnsiballZ_ini_file.py'
Dec 15 10:31:28 compute-0 sudo[55743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:29 compute-0 python3.9[55745]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 15 10:31:29 compute-0 sudo[55743]: pam_unix(sudo:session): session closed for user root
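The four ini_file tasks above set pids_limit = 4096 under [containers], events_logger = "journald" and runtime = "crun" under [engine], and network_backend = "netavark" under [network] in /etc/containers/containers.conf. A minimal Python sketch that reproduces the same key/value layout; this approximates the effect of those Ansible tasks rather than the code they run, and containers.conf is TOML, hence the quoted string values:

#!/usr/bin/env python3
# Approximation of the community.general.ini_file tasks logged above; prints
# the resulting sections instead of editing the real file in place.
import configparser
import sys

conf = configparser.ConfigParser()
conf["containers"] = {"pids_limit": "4096"}
conf["engine"] = {"events_logger": '"journald"', "runtime": '"crun"'}
conf["network"] = {"network_backend": '"netavark"'}

conf.write(sys.stdout)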
Dec 15 10:31:29 compute-0 sudo[55895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzjenfdkcgqpyvgznpjuozzcbjcctybx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794689.548055-383-255125961269171/AnsiballZ_dnf.py'
Dec 15 10:31:29 compute-0 sudo[55895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:30 compute-0 python3.9[55897]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 15 10:31:31 compute-0 sudo[55895]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:32 compute-0 sudo[56048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqgmxzjchwtvlqjdclrehrhzikboqbzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794692.212677-416-113243624799777/AnsiballZ_setup.py'
Dec 15 10:31:32 compute-0 sudo[56048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:32 compute-0 python3.9[56050]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 15 10:31:32 compute-0 sudo[56048]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:33 compute-0 sudo[56202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oytpmovkperjyovykwaerjcdqazkvxth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794693.0133677-440-122630926056404/AnsiballZ_stat.py'
Dec 15 10:31:33 compute-0 sudo[56202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:33 compute-0 python3.9[56204]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 15 10:31:33 compute-0 sudo[56202]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:34 compute-0 sudo[56354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biapihzbezxbxxsutbximfpqjzrnxinn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794693.7899501-467-135988244177088/AnsiballZ_stat.py'
Dec 15 10:31:34 compute-0 sudo[56354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:34 compute-0 python3.9[56356]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 15 10:31:34 compute-0 sudo[56354]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:34 compute-0 sudo[56506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aimmhngttnmwrdnjkcyqrpgrqttmtsjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794694.6365867-497-155580134188278/AnsiballZ_command.py'
Dec 15 10:31:34 compute-0 sudo[56506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:35 compute-0 python3.9[56508]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:31:35 compute-0 sudo[56506]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:35 compute-0 sudo[56659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlzkekuitrbxkxneqrbopmrqmykpejag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794695.4801943-527-13151047969155/AnsiballZ_service_facts.py'
Dec 15 10:31:35 compute-0 sudo[56659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:36 compute-0 python3.9[56661]: ansible-service_facts Invoked
Dec 15 10:31:36 compute-0 network[56678]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 15 10:31:36 compute-0 network[56679]: 'network-scripts' will be removed from distribution in near future.
Dec 15 10:31:36 compute-0 network[56680]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 15 10:31:38 compute-0 sudo[56659]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:40 compute-0 sudo[56963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beorixwbzrclqbpssxlcprrbarjfjcya ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1765794699.7552934-572-86716214680494/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1765794699.7552934-572-86716214680494/args'
Dec 15 10:31:40 compute-0 sudo[56963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:40 compute-0 sudo[56963]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:40 compute-0 sudo[57130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idxhbhosxzfrbbvimcvqqjpylmxhrfih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794700.4866803-605-220189815950334/AnsiballZ_dnf.py'
Dec 15 10:31:40 compute-0 sudo[57130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:41 compute-0 python3.9[57132]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 15 10:31:42 compute-0 sudo[57130]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:43 compute-0 sudo[57283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjsrnkmabkgsfmcxgxbeauejrhwmilhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794703.0218093-644-36044600772763/AnsiballZ_package_facts.py'
Dec 15 10:31:43 compute-0 sudo[57283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:43 compute-0 python3.9[57285]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 15 10:31:44 compute-0 sudo[57283]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:45 compute-0 sudo[57435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhzfatzylchsggvwtguzyrpqepybsbvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794704.8276553-674-202574540230299/AnsiballZ_stat.py'
Dec 15 10:31:45 compute-0 sudo[57435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:45 compute-0 python3.9[57437]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:31:45 compute-0 sudo[57435]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:45 compute-0 sudo[57560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylqvqoettyutuewyqqyuzzxetrataugy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794704.8276553-674-202574540230299/AnsiballZ_copy.py'
Dec 15 10:31:45 compute-0 sudo[57560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:45 compute-0 python3.9[57562]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765794704.8276553-674-202574540230299/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:31:45 compute-0 sudo[57560]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:46 compute-0 sudo[57714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afamiklfufowigpcmmjuzsqtthijoyvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794706.1513307-719-12754349681401/AnsiballZ_stat.py'
Dec 15 10:31:46 compute-0 sudo[57714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:46 compute-0 python3.9[57716]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:31:46 compute-0 sudo[57714]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:46 compute-0 sudo[57839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oheojlszyxoqhetttmmbtqcrmwmhrqxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794706.1513307-719-12754349681401/AnsiballZ_copy.py'
Dec 15 10:31:46 compute-0 sudo[57839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:47 compute-0 python3.9[57841]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765794706.1513307-719-12754349681401/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:31:47 compute-0 sudo[57839]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:48 compute-0 sudo[57993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acaxpqzezxvkwtdfrlsbuiclmbaemnba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794708.3071082-782-133824635684370/AnsiballZ_lineinfile.py'
Dec 15 10:31:48 compute-0 sudo[57993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:48 compute-0 python3.9[57995]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:31:48 compute-0 sudo[57993]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:50 compute-0 sudo[58147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pckataxbhvywrjqgnrergxfwupglpgis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794710.175266-827-159648559035767/AnsiballZ_setup.py'
Dec 15 10:31:50 compute-0 sudo[58147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:50 compute-0 python3.9[58149]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 15 10:31:50 compute-0 sudo[58147]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:51 compute-0 sudo[58231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fngtpdgdageudizydeemvsfhvlgakylr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794710.175266-827-159648559035767/AnsiballZ_systemd.py'
Dec 15 10:31:51 compute-0 sudo[58231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:51 compute-0 python3.9[58233]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 15 10:31:51 compute-0 sudo[58231]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:53 compute-0 sudo[58385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukvzshbwpayulrfoduyqqtcurmagrlqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794712.8523273-875-155925576283925/AnsiballZ_setup.py'
Dec 15 10:31:53 compute-0 sudo[58385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:53 compute-0 python3.9[58387]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 15 10:31:53 compute-0 sudo[58385]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:53 compute-0 sudo[58469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovbdezrvtlruuikvltitsoxptktkmjiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794712.8523273-875-155925576283925/AnsiballZ_systemd.py'
Dec 15 10:31:53 compute-0 sudo[58469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:31:54 compute-0 python3.9[58471]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 15 10:31:54 compute-0 chronyd[782]: chronyd exiting
Dec 15 10:31:54 compute-0 systemd[1]: Stopping NTP client/server...
Dec 15 10:31:54 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Dec 15 10:31:54 compute-0 systemd[1]: Stopped NTP client/server.
Dec 15 10:31:54 compute-0 systemd[1]: Starting NTP client/server...
Dec 15 10:31:54 compute-0 chronyd[58479]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 15 10:31:54 compute-0 chronyd[58479]: Frequency -32.117 +/- 0.240 ppm read from /var/lib/chrony/drift
Dec 15 10:31:54 compute-0 chronyd[58479]: Loaded seccomp filter (level 2)
Dec 15 10:31:54 compute-0 systemd[1]: Started NTP client/server.
Dec 15 10:31:54 compute-0 sudo[58469]: pam_unix(sudo:session): session closed for user root
Dec 15 10:31:54 compute-0 sshd-session[53528]: Connection closed by 192.168.122.30 port 35974
Dec 15 10:31:54 compute-0 sshd-session[53525]: pam_unix(sshd:session): session closed for user zuul
Dec 15 10:31:54 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Dec 15 10:31:54 compute-0 systemd[1]: session-12.scope: Consumed 24.495s CPU time.
Dec 15 10:31:54 compute-0 systemd-logind[797]: Session 12 logged out. Waiting for processes to exit.
Dec 15 10:31:54 compute-0 systemd-logind[797]: Removed session 12.
Dec 15 10:32:01 compute-0 anacron[7481]: Job `cron.weekly' started
Dec 15 10:32:01 compute-0 anacron[7481]: Job `cron.weekly' terminated
Dec 15 10:32:02 compute-0 sshd-session[58507]: Accepted publickey for zuul from 192.168.122.30 port 32818 ssh2: ECDSA SHA256:RI7NGykFU6DY8VmZwamQwcvn1msDfj+uaMVMpAHHUfo
Dec 15 10:32:02 compute-0 systemd-logind[797]: New session 13 of user zuul.
Dec 15 10:32:02 compute-0 systemd[1]: Started Session 13 of User zuul.
Dec 15 10:32:02 compute-0 sshd-session[58507]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 15 10:32:02 compute-0 sudo[58660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uatfpzdawhdeekzjykrfbiobjmkyahgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794722.5241954-26-107129396585166/AnsiballZ_file.py'
Dec 15 10:32:02 compute-0 sudo[58660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:03 compute-0 python3.9[58662]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:03 compute-0 sudo[58660]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:04 compute-0 sudo[58812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbskovjudnnzxwltgidzntmtscavffmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794723.6512976-62-216280915957358/AnsiballZ_stat.py'
Dec 15 10:32:04 compute-0 sudo[58812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:04 compute-0 python3.9[58814]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:32:04 compute-0 sudo[58812]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:04 compute-0 sudo[58935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oirfyslrjcrcgsqwyfygqnbszjwsswlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794723.6512976-62-216280915957358/AnsiballZ_copy.py'
Dec 15 10:32:04 compute-0 sudo[58935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:04 compute-0 python3.9[58937]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765794723.6512976-62-216280915957358/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:05 compute-0 sudo[58935]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:05 compute-0 sshd-session[58510]: Connection closed by 192.168.122.30 port 32818
Dec 15 10:32:05 compute-0 sshd-session[58507]: pam_unix(sshd:session): session closed for user zuul
Dec 15 10:32:05 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Dec 15 10:32:05 compute-0 systemd[1]: session-13.scope: Consumed 1.588s CPU time.
Dec 15 10:32:05 compute-0 systemd-logind[797]: Session 13 logged out. Waiting for processes to exit.
Dec 15 10:32:05 compute-0 systemd-logind[797]: Removed session 13.
Dec 15 10:32:11 compute-0 sshd-session[58962]: Accepted publickey for zuul from 192.168.122.30 port 57008 ssh2: ECDSA SHA256:RI7NGykFU6DY8VmZwamQwcvn1msDfj+uaMVMpAHHUfo
Dec 15 10:32:11 compute-0 systemd-logind[797]: New session 14 of user zuul.
Dec 15 10:32:11 compute-0 systemd[1]: Started Session 14 of User zuul.
Dec 15 10:32:11 compute-0 sshd-session[58962]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 15 10:32:12 compute-0 python3.9[59115]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 15 10:32:13 compute-0 sudo[59269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjvqtrjpeqlmglgieqhxwsmejkpkzmrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794732.706422-59-202888389952393/AnsiballZ_file.py'
Dec 15 10:32:13 compute-0 sudo[59269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:13 compute-0 python3.9[59271]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:13 compute-0 sudo[59269]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:14 compute-0 sudo[59444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhfclkbsgafnlnendrafinryjugjwsnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794733.5341537-83-65648043901477/AnsiballZ_stat.py'
Dec 15 10:32:14 compute-0 sudo[59444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:14 compute-0 python3.9[59446]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:32:14 compute-0 sudo[59444]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:14 compute-0 sudo[59567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcnfsfbvirwcahrlnfznnnprzwidjdgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794733.5341537-83-65648043901477/AnsiballZ_copy.py'
Dec 15 10:32:14 compute-0 sudo[59567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:14 compute-0 python3.9[59569]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765794733.5341537-83-65648043901477/.source.json _original_basename=.h59ljnr1 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:14 compute-0 sudo[59567]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:15 compute-0 sudo[59719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtzpoamilbbexaofcpjrqactpzmynzvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794735.4381368-152-142167696552641/AnsiballZ_stat.py'
Dec 15 10:32:15 compute-0 sudo[59719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:15 compute-0 python3.9[59721]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:32:15 compute-0 sudo[59719]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:16 compute-0 sudo[59842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scazslewzgdtnghcpxdzfnenzurjulyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794735.4381368-152-142167696552641/AnsiballZ_copy.py'
Dec 15 10:32:16 compute-0 sudo[59842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:16 compute-0 python3.9[59844]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765794735.4381368-152-142167696552641/.source _original_basename=.qq6aoznr follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:16 compute-0 sudo[59842]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:17 compute-0 sudo[59994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blmlqsaxgodrqchwncalxuprffvhpdwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794736.7632592-200-131469235509838/AnsiballZ_file.py'
Dec 15 10:32:17 compute-0 sudo[59994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:17 compute-0 python3.9[59996]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 15 10:32:17 compute-0 sudo[59994]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:17 compute-0 sudo[60146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjirjsrmpidbsffuusotdkelozfyfyxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794737.4022865-224-163781239426692/AnsiballZ_stat.py'
Dec 15 10:32:17 compute-0 sudo[60146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:17 compute-0 python3.9[60148]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:32:17 compute-0 sudo[60146]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:18 compute-0 sudo[60269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pirkpowgomnatbrxooobvnogbyofmxkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794737.4022865-224-163781239426692/AnsiballZ_copy.py'
Dec 15 10:32:18 compute-0 sudo[60269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:18 compute-0 python3.9[60271]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765794737.4022865-224-163781239426692/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 15 10:32:18 compute-0 sudo[60269]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:18 compute-0 sudo[60421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-polqqktygaebtfypherbgmnysjioftam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794738.6298935-224-46345937794481/AnsiballZ_stat.py'
Dec 15 10:32:18 compute-0 sudo[60421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:19 compute-0 python3.9[60423]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:32:19 compute-0 sudo[60421]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:19 compute-0 sudo[60544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoygcutuxmvspqfxijkxpczzzlcnmqzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794738.6298935-224-46345937794481/AnsiballZ_copy.py'
Dec 15 10:32:19 compute-0 sudo[60544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:19 compute-0 python3.9[60546]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765794738.6298935-224-46345937794481/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 15 10:32:19 compute-0 sudo[60544]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:20 compute-0 sudo[60696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trrydvgwksboasdqeswsjmyqxqjulcnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794739.7751713-311-165988599684628/AnsiballZ_file.py'
Dec 15 10:32:20 compute-0 sudo[60696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:20 compute-0 python3.9[60698]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:20 compute-0 sudo[60696]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:20 compute-0 sudo[60848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeughrrhvwmcsnmrgasiauwxqrfswiux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794740.4241085-335-11633158293343/AnsiballZ_stat.py'
Dec 15 10:32:20 compute-0 sudo[60848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:20 compute-0 python3.9[60850]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:32:20 compute-0 sudo[60848]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:21 compute-0 sudo[60971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmoucdlectitetkstbtfrjvltauoazfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794740.4241085-335-11633158293343/AnsiballZ_copy.py'
Dec 15 10:32:21 compute-0 sudo[60971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:21 compute-0 python3.9[60973]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765794740.4241085-335-11633158293343/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:21 compute-0 sudo[60971]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:21 compute-0 sudo[61123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sorslduziqmjmvjqyhgkykmaswghkiyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794741.6443124-380-146780620844145/AnsiballZ_stat.py'
Dec 15 10:32:21 compute-0 sudo[61123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:22 compute-0 python3.9[61125]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:32:22 compute-0 sudo[61123]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:22 compute-0 sudo[61246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrxsalfmhzxoeqqbffxvsgqsamejdzlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794741.6443124-380-146780620844145/AnsiballZ_copy.py'
Dec 15 10:32:22 compute-0 sudo[61246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:22 compute-0 python3.9[61248]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765794741.6443124-380-146780620844145/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:22 compute-0 sudo[61246]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:23 compute-0 sudo[61398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usuuttxkotdvhurpnskfvivlblpivwai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794742.8561769-425-20461052012469/AnsiballZ_systemd.py'
Dec 15 10:32:23 compute-0 sudo[61398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:23 compute-0 python3.9[61400]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 15 10:32:23 compute-0 systemd[1]: Reloading.
Dec 15 10:32:23 compute-0 systemd-rc-local-generator[61425]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:32:23 compute-0 systemd-sysv-generator[61428]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:32:24 compute-0 systemd[1]: Reloading.
Dec 15 10:32:24 compute-0 systemd-rc-local-generator[61465]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:32:24 compute-0 systemd-sysv-generator[61469]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:32:24 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Dec 15 10:32:24 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Dec 15 10:32:24 compute-0 sudo[61398]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:24 compute-0 sudo[61625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbsjuvtvlqpftxjmxgtlzbioalavjavr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794744.5678701-449-263816354565102/AnsiballZ_stat.py'
Dec 15 10:32:24 compute-0 sudo[61625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:25 compute-0 python3.9[61627]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:32:25 compute-0 sudo[61625]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:25 compute-0 sudo[61748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lishfzidiygrtlrossdxvwjkropmsncv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794744.5678701-449-263816354565102/AnsiballZ_copy.py'
Dec 15 10:32:25 compute-0 sudo[61748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:25 compute-0 python3.9[61750]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765794744.5678701-449-263816354565102/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:25 compute-0 sudo[61748]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:26 compute-0 sudo[61900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpejkgvfaamwogmblqvnbigdselavdgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794745.8348806-494-126109988393617/AnsiballZ_stat.py'
Dec 15 10:32:26 compute-0 sudo[61900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:26 compute-0 python3.9[61902]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:32:26 compute-0 sudo[61900]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:26 compute-0 sudo[62023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmpivdonbaeumjjqgekjjykwtsirqqqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794745.8348806-494-126109988393617/AnsiballZ_copy.py'
Dec 15 10:32:26 compute-0 sudo[62023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:26 compute-0 python3.9[62025]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765794745.8348806-494-126109988393617/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:26 compute-0 sudo[62023]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:27 compute-0 sudo[62175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzbmihcxgkvosivkjaxgvxudngdroyiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794747.0222223-539-18716406470445/AnsiballZ_systemd.py'
Dec 15 10:32:27 compute-0 sudo[62175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:27 compute-0 python3.9[62177]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 15 10:32:27 compute-0 systemd[1]: Reloading.
Dec 15 10:32:27 compute-0 systemd-rc-local-generator[62208]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:32:27 compute-0 systemd-sysv-generator[62212]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:32:27 compute-0 systemd[1]: Reloading.
Dec 15 10:32:27 compute-0 systemd-rc-local-generator[62243]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:32:27 compute-0 systemd-sysv-generator[62247]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:32:28 compute-0 systemd[1]: Starting Create netns directory...
Dec 15 10:32:28 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 15 10:32:28 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 15 10:32:28 compute-0 systemd[1]: Finished Create netns directory.
Dec 15 10:32:28 compute-0 sudo[62175]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:28 compute-0 python3.9[62404]: ansible-ansible.builtin.service_facts Invoked
Dec 15 10:32:28 compute-0 network[62421]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 15 10:32:28 compute-0 network[62422]: 'network-scripts' will be removed from distribution in near future.
Dec 15 10:32:28 compute-0 network[62423]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 15 10:32:32 compute-0 sudo[62683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qculxqtzabbaqmvlsmnujdbnvvondora ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794752.591832-587-216896683578357/AnsiballZ_systemd.py'
Dec 15 10:32:32 compute-0 sudo[62683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:33 compute-0 python3.9[62685]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 15 10:32:33 compute-0 systemd[1]: Reloading.
Dec 15 10:32:33 compute-0 systemd-rc-local-generator[62713]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:32:33 compute-0 systemd-sysv-generator[62718]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:32:33 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Dec 15 10:32:33 compute-0 iptables.init[62725]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec 15 10:32:33 compute-0 iptables.init[62725]: iptables: Flushing firewall rules: [  OK  ]
Dec 15 10:32:33 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Dec 15 10:32:33 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Dec 15 10:32:33 compute-0 sudo[62683]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:34 compute-0 sudo[62919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnviemtavlsiqlpsfflbfsmabladkkek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794753.9432323-587-236829100316557/AnsiballZ_systemd.py'
Dec 15 10:32:34 compute-0 sudo[62919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:34 compute-0 python3.9[62921]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 15 10:32:34 compute-0 sudo[62919]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:35 compute-0 sudo[63073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jktiklzfxdxdrbxpfoiepqndyqslutaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794754.8488293-635-172625430306875/AnsiballZ_systemd.py'
Dec 15 10:32:35 compute-0 sudo[63073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:35 compute-0 python3.9[63075]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 15 10:32:35 compute-0 systemd[1]: Reloading.
Dec 15 10:32:35 compute-0 systemd-sysv-generator[63102]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:32:35 compute-0 systemd-rc-local-generator[63099]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:32:35 compute-0 systemd[1]: Starting Netfilter Tables...
Dec 15 10:32:35 compute-0 systemd[1]: Finished Netfilter Tables.
Dec 15 10:32:35 compute-0 sudo[63073]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:36 compute-0 sudo[63265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixfpasbdrtbpxpvotboaoswxxwotevcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794755.9740705-659-280002335661536/AnsiballZ_command.py'
Dec 15 10:32:36 compute-0 sudo[63265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:36 compute-0 python3.9[63267]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:32:36 compute-0 sudo[63265]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:37 compute-0 sudo[63418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujwazmbzsrpokgmrvjhosbscpnqbzkay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794757.1524415-701-68096291879011/AnsiballZ_stat.py'
Dec 15 10:32:37 compute-0 sudo[63418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:37 compute-0 python3.9[63420]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:32:37 compute-0 sudo[63418]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:37 compute-0 sudo[63543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnteydzkyzyprxqttadrhvygqiiheejj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794757.1524415-701-68096291879011/AnsiballZ_copy.py'
Dec 15 10:32:37 compute-0 sudo[63543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:38 compute-0 python3.9[63545]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765794757.1524415-701-68096291879011/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:38 compute-0 sudo[63543]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:38 compute-0 sudo[63696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcauwnljfaayxiqqmooevonxiinqlxli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794758.468294-746-152968529713967/AnsiballZ_systemd.py'
Dec 15 10:32:38 compute-0 sudo[63696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:39 compute-0 python3.9[63698]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 15 10:32:39 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Dec 15 10:32:39 compute-0 sshd[1008]: Received SIGHUP; restarting.
Dec 15 10:32:39 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Dec 15 10:32:39 compute-0 sshd[1008]: Server listening on 0.0.0.0 port 22.
Dec 15 10:32:39 compute-0 sshd[1008]: Server listening on :: port 22.
Dec 15 10:32:39 compute-0 sudo[63696]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:39 compute-0 sudo[63852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aodxnktsjqiojbivvwgllvkabocnbrrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794759.3618362-770-39384409535699/AnsiballZ_file.py'
Dec 15 10:32:39 compute-0 sudo[63852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:39 compute-0 python3.9[63854]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:39 compute-0 sudo[63852]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:40 compute-0 sudo[64004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mewxuxvpuocqtevldjutmrhwimnoddsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794760.0269957-794-217340434936717/AnsiballZ_stat.py'
Dec 15 10:32:40 compute-0 sudo[64004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:40 compute-0 python3.9[64006]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:32:40 compute-0 sudo[64004]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:40 compute-0 sudo[64127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjawxrlcmmngmecyotanpptafpmifvoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794760.0269957-794-217340434936717/AnsiballZ_copy.py'
Dec 15 10:32:40 compute-0 sudo[64127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:41 compute-0 python3.9[64129]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765794760.0269957-794-217340434936717/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:41 compute-0 sudo[64127]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:42 compute-0 sudo[64279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whlqebrhjedikvqytkutlrmfamffnuuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794761.5794475-848-182851418189676/AnsiballZ_timezone.py'
Dec 15 10:32:42 compute-0 sudo[64279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:42 compute-0 python3.9[64281]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 15 10:32:42 compute-0 systemd[1]: Starting Time & Date Service...
Dec 15 10:32:42 compute-0 systemd[1]: Started Time & Date Service.
Dec 15 10:32:42 compute-0 sudo[64279]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:42 compute-0 sudo[64435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzpwnswjwzomtgcaicyrkrtlheewzdhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794762.623356-875-213864876804369/AnsiballZ_file.py'
Dec 15 10:32:42 compute-0 sudo[64435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:43 compute-0 python3.9[64437]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:43 compute-0 sudo[64435]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:43 compute-0 sudo[64587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yafjjeigxedmhshmjlcbbympgtfsesmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794763.2868857-899-140902117181134/AnsiballZ_stat.py'
Dec 15 10:32:43 compute-0 sudo[64587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:43 compute-0 python3.9[64589]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:32:43 compute-0 sudo[64587]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:44 compute-0 sudo[64710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opzznwcpqwbcprdqllbpvjpxjkyfegfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794763.2868857-899-140902117181134/AnsiballZ_copy.py'
Dec 15 10:32:44 compute-0 sudo[64710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:44 compute-0 python3.9[64712]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765794763.2868857-899-140902117181134/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:44 compute-0 sudo[64710]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:44 compute-0 sudo[64862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzemijmxgqxdstqxneyinwgfnnjpyiob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794764.5043252-944-216182307270318/AnsiballZ_stat.py'
Dec 15 10:32:44 compute-0 sudo[64862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:44 compute-0 python3.9[64864]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:32:44 compute-0 sudo[64862]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:45 compute-0 sudo[64985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdnbllesyxmsblyhidqbmyjzqtcaixnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794764.5043252-944-216182307270318/AnsiballZ_copy.py'
Dec 15 10:32:45 compute-0 sudo[64985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:45 compute-0 python3.9[64987]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765794764.5043252-944-216182307270318/.source.yaml _original_basename=.w64w0coq follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:45 compute-0 sudo[64985]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:45 compute-0 sudo[65137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jutwzzlilzbycdeuqckoblzthsyjnckp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794765.7029548-989-199060262097872/AnsiballZ_stat.py'
Dec 15 10:32:45 compute-0 sudo[65137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:46 compute-0 python3.9[65139]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:32:46 compute-0 sudo[65137]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:46 compute-0 sudo[65260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwlnxeavkmrbadsuppqvwquxzsuhfcwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794765.7029548-989-199060262097872/AnsiballZ_copy.py'
Dec 15 10:32:46 compute-0 sudo[65260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:46 compute-0 python3.9[65262]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765794765.7029548-989-199060262097872/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:46 compute-0 sudo[65260]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:47 compute-0 sudo[65412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhtwxrblepcnycjogdweswjiymdfmcqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794766.8509822-1034-236819996236546/AnsiballZ_command.py'
Dec 15 10:32:47 compute-0 sudo[65412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:47 compute-0 python3.9[65414]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:32:47 compute-0 sudo[65412]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:47 compute-0 sudo[65565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnvjwhsdrqrisplsdtmsydwkfsnzegum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794767.5238156-1058-107774311381781/AnsiballZ_command.py'
Dec 15 10:32:47 compute-0 sudo[65565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:47 compute-0 python3.9[65567]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:32:48 compute-0 sudo[65565]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:48 compute-0 sudo[65718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnrtcejvqsxvntumpufxdckpzojzgplf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765794768.270803-1082-133059663816026/AnsiballZ_edpm_nftables_from_files.py'
Dec 15 10:32:48 compute-0 sudo[65718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:48 compute-0 python3[65720]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 15 10:32:48 compute-0 sudo[65718]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:49 compute-0 sudo[65870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtqzshjcxihbvwcqhovdqyuevpnerimt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794769.1157813-1106-188729471096529/AnsiballZ_stat.py'
Dec 15 10:32:49 compute-0 sudo[65870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:49 compute-0 python3.9[65872]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:32:49 compute-0 sudo[65870]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:49 compute-0 sudo[65993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdmecbifvsamvmnfjzwaloetwqwpgtdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794769.1157813-1106-188729471096529/AnsiballZ_copy.py'
Dec 15 10:32:49 compute-0 sudo[65993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:50 compute-0 python3.9[65995]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765794769.1157813-1106-188729471096529/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:50 compute-0 sudo[65993]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:50 compute-0 sudo[66145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwvsgrcijfwhvtsvqeolpxrpksyqpldq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794770.367928-1151-22566257458543/AnsiballZ_stat.py'
Dec 15 10:32:50 compute-0 sudo[66145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:50 compute-0 python3.9[66147]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:32:50 compute-0 sudo[66145]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:51 compute-0 sudo[66268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bphzdraerjwsyrbeatxgiadfxdswyfhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794770.367928-1151-22566257458543/AnsiballZ_copy.py'
Dec 15 10:32:51 compute-0 sudo[66268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:51 compute-0 python3.9[66270]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765794770.367928-1151-22566257458543/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:51 compute-0 sudo[66268]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:51 compute-0 sudo[66420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ileyrhgnmizwjqsqriyillnugfuktsqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794771.614473-1196-16509951993085/AnsiballZ_stat.py'
Dec 15 10:32:51 compute-0 sudo[66420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:52 compute-0 python3.9[66422]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:32:52 compute-0 sudo[66420]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:52 compute-0 sudo[66543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifzcmzpurfdxnadqitjwtmryeygqjvgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794771.614473-1196-16509951993085/AnsiballZ_copy.py'
Dec 15 10:32:52 compute-0 sudo[66543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:52 compute-0 python3.9[66545]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765794771.614473-1196-16509951993085/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:52 compute-0 sudo[66543]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:53 compute-0 sudo[66695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsxfkrsycnpvgyilubcuafqowrgkyszb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794773.028931-1241-140191545014277/AnsiballZ_stat.py'
Dec 15 10:32:53 compute-0 sudo[66695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:53 compute-0 python3.9[66697]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:32:53 compute-0 sudo[66695]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:53 compute-0 sudo[66818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjcfmmqaqvtexuhrxvlsdjcaqcoivbsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794773.028931-1241-140191545014277/AnsiballZ_copy.py'
Dec 15 10:32:53 compute-0 sudo[66818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:54 compute-0 python3.9[66820]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765794773.028931-1241-140191545014277/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:54 compute-0 sudo[66818]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:54 compute-0 sudo[66970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryqhkyeywhlaopewibvqhtaabefzjpxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794774.2783644-1286-193963794516115/AnsiballZ_stat.py'
Dec 15 10:32:54 compute-0 sudo[66970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:54 compute-0 python3.9[66972]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 15 10:32:54 compute-0 sudo[66970]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:55 compute-0 sudo[67093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqevazlalnnykhitrleuuqkadrfwdqdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794774.2783644-1286-193963794516115/AnsiballZ_copy.py'
Dec 15 10:32:55 compute-0 sudo[67093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:55 compute-0 python3.9[67095]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765794774.2783644-1286-193963794516115/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:55 compute-0 sudo[67093]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:55 compute-0 sudo[67245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcytkopdllyswgcsimwllehedpsdwbbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794775.5720623-1331-235134795555544/AnsiballZ_file.py'
Dec 15 10:32:55 compute-0 sudo[67245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:56 compute-0 python3.9[67247]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:56 compute-0 sudo[67245]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:56 compute-0 sudo[67398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnaaqrrxdzmgfgcsqhnugobjajpitwau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794776.2185254-1355-257664821196845/AnsiballZ_command.py'
Dec 15 10:32:56 compute-0 sudo[67398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:56 compute-0 python3.9[67400]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:32:56 compute-0 sudo[67398]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:57 compute-0 sudo[67557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-easbhocvbfvshjkvshtoqcpxczffcrhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794777.0201397-1379-177978666222955/AnsiballZ_blockinfile.py'
Dec 15 10:32:57 compute-0 sudo[67557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:57 compute-0 python3.9[67559]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:57 compute-0 sudo[67557]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:58 compute-0 sudo[67710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xebgydjyjpdmwhrweuljzjljeqlcqpjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794777.9701927-1406-59016665237696/AnsiballZ_file.py'
Dec 15 10:32:58 compute-0 sudo[67710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:58 compute-0 python3.9[67712]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:58 compute-0 sudo[67710]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:58 compute-0 sudo[67862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmuyfkvovcxqyabanfhmldsfzxvamwzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794778.5459278-1406-176563721154067/AnsiballZ_file.py'
Dec 15 10:32:58 compute-0 sudo[67862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:59 compute-0 python3.9[67864]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:32:59 compute-0 sudo[67862]: pam_unix(sudo:session): session closed for user root
Dec 15 10:32:59 compute-0 sudo[68014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxtlatsczyxtcvaukqtainuwuaaftzhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794779.2706172-1451-145866557274971/AnsiballZ_mount.py'
Dec 15 10:32:59 compute-0 sudo[68014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:32:59 compute-0 python3.9[68016]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 15 10:32:59 compute-0 sudo[68014]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:00 compute-0 sudo[68167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daefsljgfjylmkvursrfdqthdbkijoxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794780.0903692-1451-196801244980436/AnsiballZ_mount.py'
Dec 15 10:33:00 compute-0 sudo[68167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:00 compute-0 python3.9[68169]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 15 10:33:00 compute-0 sudo[68167]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:01 compute-0 sshd-session[58965]: Connection closed by 192.168.122.30 port 57008
Dec 15 10:33:01 compute-0 sshd-session[58962]: pam_unix(sshd:session): session closed for user zuul
Dec 15 10:33:01 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Dec 15 10:33:01 compute-0 systemd[1]: session-14.scope: Consumed 33.631s CPU time.
Dec 15 10:33:01 compute-0 systemd-logind[797]: Session 14 logged out. Waiting for processes to exit.
Dec 15 10:33:01 compute-0 systemd-logind[797]: Removed session 14.
Dec 15 10:33:09 compute-0 sshd-session[68196]: Accepted publickey for zuul from 192.168.122.30 port 59570 ssh2: ECDSA SHA256:RI7NGykFU6DY8VmZwamQwcvn1msDfj+uaMVMpAHHUfo
Dec 15 10:33:09 compute-0 systemd-logind[797]: New session 15 of user zuul.
Dec 15 10:33:09 compute-0 systemd[1]: Started Session 15 of User zuul.
Dec 15 10:33:09 compute-0 sshd-session[68196]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 15 10:33:09 compute-0 sudo[68349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vascvikutvasgvulzvmcdwqamqhckxwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794789.2679887-18-174096531579932/AnsiballZ_tempfile.py'
Dec 15 10:33:09 compute-0 sudo[68349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:09 compute-0 python3.9[68351]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 15 10:33:09 compute-0 sudo[68349]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:10 compute-0 sudo[68501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-totqiadsryeyupeafqongfrtuskkfnqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794790.1158562-54-179891998481439/AnsiballZ_stat.py'
Dec 15 10:33:10 compute-0 sudo[68501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:10 compute-0 python3.9[68503]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 15 10:33:10 compute-0 sudo[68501]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:11 compute-0 sudo[68653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otrlmfepkbqscllyvzfbamhhdobduobn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794791.0017943-84-46973880741067/AnsiballZ_setup.py'
Dec 15 10:33:11 compute-0 sudo[68653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:11 compute-0 python3.9[68655]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 15 10:33:11 compute-0 sudo[68653]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:12 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 15 10:33:12 compute-0 sudo[68807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrwddaquuscsugxooalexdkyudacnpdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794792.1348753-109-277793467285024/AnsiballZ_blockinfile.py'
Dec 15 10:33:12 compute-0 sudo[68807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:12 compute-0 python3.9[68809]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhpsOQEumqGRPV8Xh6AFZPRvyyfXML3f0/wjnJON+qFu4zM9i7oB7zmcqstFQT4uX18ZS0YqJO/4/ryBPAZZOFG2L0bST8pKItAo/EGfGFAH5g/rlQxImKBGJDdUkvzzreKXEq3561cqOQ201XRZeDFgo+XR+vnP2QLXT3fxAZq6ctdbWEhlbwTHbwsMqTxXOBOOxq/ZpDQCohQVHvu2gR6+zxTKDzldWDQKY4ztjIWcsZbG7ZKjvhPwuMhI9d413rPhLIAdnEFVIPVHr2Uy5OpuPM24bMMyDYJLDeOrP/t1KwW/fjF1Haq7+O8cwPJjNmXz+shKyKvZPqSrb5KkjUPL0MeYi7ak0OMxkoGBvZi6PzePAgjMLqvzU+66oztZIiyWWO2LoMWTGglt+Whf8UT46CGt+pq3d76bUGi20Eqmu9Z6mLwFJfNKFW9hrIN3B22lBda64bUpgkBTtyL61sQecvhshKQ59886E/McQSbtnwrDqPUnBeT0ocwPgBTKc=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKmOBWDyZb3c+aXZR04GbPpRmjwnOkb5HnAaWmFgvuKt
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBmHBVlA5uHiVuqPz1bHxnOBtIolgkn41UmJM78VlGvRQkHOmQLJCf0YIr1o5C0I9BVEeztgT3yVdsVF7WthK00=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9syzFS6fY85LHnsYWKySOl7rUDErL2H+RwegHr9vyAKidKthTZsisMdZSFtk5ZKNZ7+Qb5EA7GUczjFJbT/brDUx57sgRoCXF40Fk2Fw48HGJYp31kSSKa3h1VyeTe0fyVVKEjPNb66uQ+bZvJKQltL4Hc6X5eQlkTvys4VwNNjaaa6YHGhpRVabaO8Ho93SRsiJH+CsdTz7jDXRjUPRdMFFxfKA48Bs4K5z6ZKT06e5gsek588DfDKn2GPeepslVlSuCZvOsIbaONb0zZCe0vUE8mo/RbwzNELnfP6TGkxMs0tlE1c13vlxX4qItjjpH6tI3rQvvwyGTGIwzank4kY1imUWPyt8NnLWwSs/dPtCKh38wvnBWQsKEaNlc1aeKYUg6HcCvfrMIKylU4EVbARC4t2NF9QTQNSTmSKqXn+5vMulVaqSankUFcDUWOmKfq1CG4n5Jv3WXKstZD8WlwdqxCfzVcWbNGPNqXOtb4yo+Rs4ez2Aek8+C0GKXWrc=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEbOqjJRBNuhjcNZlcxAvdlyb0V+p1AbBmOo53BDH1YG
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJnt1BECmspgFuAg/sXSDx8pwePB/rZDS9qwhcVL5XIeieGewRvKDPioiJGNKJbpTSV5nXyZD6m+9M4gAgTSXX8=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZ+xRYEtEBoNnWnr2ccnQRqT407msl+WUKh7fGq5TeOsO74R/k1s49f1ICWID/DVmfSyXhIATCaQLjiANootszkwFvUp6eTTfNIQj7lIGpOVM0RPWoUyQHo2gkdFbsOdL7mZVj8+d5A//nbaIlSC5vAFz1TSthrxvaaroQ6eSHeLM4yMIzqPLG73ugoAyQanutJ1cF+ZYQuhfLEL5D99UOSpinPiHHQKAn0T/ClJ8wxFW4ZhINRbjZNuCSFhTwyEfE3/sfqvdL2hv8CQ0fy3r3SBpL9pZ2M5Scvmx1ykhWevZH3QEXm0lHQQnXYIgyHYBQJtoNP1OgfuqiENqEgxeUS+s04/vsZzeWIXZdYazKsxn2LYNohoIO7GW+MgGCCYXtPnGEZ6aqUIGSBLGhchBF/KI/v3VcWEpkR5vVrVHyZ5DlF0pfGFVRv9+EgvNe1WZBeSD47l5Rh6+r4+8leCvgfhWVWXm1aKvm9xUBQW+VDZ3iSjI4BBx56RmF5XJQQGM=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIITQlOiOJwFkTPvCpfcFoA29MjUorFlU/zgJ92LSO1Mw
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLELR7pkSR/uOE62m8AFu0lpAWX6A3pNU28SYb68x5F67VZGXSIVIS5hMZE7epQYQmHmhti3qfKlI0yah8/SnSo=
                                             create=True mode=0644 path=/tmp/ansible.x1cjnerv state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:33:12 compute-0 sudo[68807]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:13 compute-0 sudo[68959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbrxnnpmmpiworoslkbcgccwwkokicdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794792.8946433-133-8437752722357/AnsiballZ_command.py'
Dec 15 10:33:13 compute-0 sudo[68959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:13 compute-0 python3.9[68961]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.x1cjnerv' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:33:13 compute-0 sudo[68959]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:14 compute-0 sudo[69113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvqmmgjtbhqfcqdzsuofkjtoioeesrmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794793.7278392-157-40413748531446/AnsiballZ_file.py'
Dec 15 10:33:14 compute-0 sudo[69113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:14 compute-0 python3.9[69115]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.x1cjnerv state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:33:14 compute-0 sudo[69113]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:14 compute-0 sshd-session[68199]: Connection closed by 192.168.122.30 port 59570
Dec 15 10:33:14 compute-0 sshd-session[68196]: pam_unix(sshd:session): session closed for user zuul
Dec 15 10:33:14 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Dec 15 10:33:14 compute-0 systemd[1]: session-15.scope: Consumed 3.118s CPU time.
Dec 15 10:33:14 compute-0 systemd-logind[797]: Session 15 logged out. Waiting for processes to exit.
Dec 15 10:33:14 compute-0 systemd-logind[797]: Removed session 15.
Dec 15 10:33:23 compute-0 sshd-session[69140]: Accepted publickey for zuul from 192.168.122.30 port 37812 ssh2: ECDSA SHA256:RI7NGykFU6DY8VmZwamQwcvn1msDfj+uaMVMpAHHUfo
Dec 15 10:33:23 compute-0 systemd-logind[797]: New session 16 of user zuul.
Dec 15 10:33:23 compute-0 systemd[1]: Started Session 16 of User zuul.
Dec 15 10:33:23 compute-0 sshd-session[69140]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 15 10:33:24 compute-0 python3.9[69293]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 15 10:33:25 compute-0 sudo[69447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtbigleccvvhpmbfqmmmdigyvmunbtlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794804.555626-56-93754783294043/AnsiballZ_systemd.py'
Dec 15 10:33:25 compute-0 sudo[69447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:25 compute-0 python3.9[69449]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 15 10:33:25 compute-0 sudo[69447]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:25 compute-0 sudo[69601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvdqodbalzndmvyteswcztxxlhsuoimj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794805.6761894-80-221730262667272/AnsiballZ_systemd.py'
Dec 15 10:33:25 compute-0 sudo[69601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:26 compute-0 python3.9[69603]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 15 10:33:26 compute-0 sudo[69601]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:26 compute-0 sudo[69754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejojyxclirvhhnwptciibzhozrvbkkfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794806.55674-107-189110725981287/AnsiballZ_command.py'
Dec 15 10:33:26 compute-0 sudo[69754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:27 compute-0 python3.9[69756]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:33:27 compute-0 sudo[69754]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:27 compute-0 sudo[69907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnbrehzxhzuntxsrdtxwgmtvaagdstrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794807.3668272-131-187425029452774/AnsiballZ_stat.py'
Dec 15 10:33:27 compute-0 sudo[69907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:27 compute-0 python3.9[69909]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 15 10:33:27 compute-0 sudo[69907]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:28 compute-0 sudo[70061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-issgryjbfjzhafyzeqfqrcktrpetfrhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794808.1447742-155-178847443924197/AnsiballZ_command.py'
Dec 15 10:33:28 compute-0 sudo[70061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:28 compute-0 python3.9[70063]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:33:28 compute-0 sudo[70061]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:29 compute-0 sudo[70216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxofigplhgfkshxzuomityqkuzzwrrnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794808.7975829-179-167014911959868/AnsiballZ_file.py'
Dec 15 10:33:29 compute-0 sudo[70216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:29 compute-0 python3.9[70218]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:33:29 compute-0 sudo[70216]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:29 compute-0 sshd-session[69143]: Connection closed by 192.168.122.30 port 37812
Dec 15 10:33:29 compute-0 sshd-session[69140]: pam_unix(sshd:session): session closed for user zuul
Dec 15 10:33:29 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Dec 15 10:33:29 compute-0 systemd[1]: session-16.scope: Consumed 4.150s CPU time.
Dec 15 10:33:29 compute-0 systemd-logind[797]: Session 16 logged out. Waiting for processes to exit.
Dec 15 10:33:29 compute-0 systemd-logind[797]: Removed session 16.
Dec 15 10:33:35 compute-0 sshd-session[70243]: Accepted publickey for zuul from 192.168.122.30 port 55036 ssh2: ECDSA SHA256:RI7NGykFU6DY8VmZwamQwcvn1msDfj+uaMVMpAHHUfo
Dec 15 10:33:35 compute-0 systemd-logind[797]: New session 17 of user zuul.
Dec 15 10:33:35 compute-0 systemd[1]: Started Session 17 of User zuul.
Dec 15 10:33:35 compute-0 sshd-session[70243]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 15 10:33:36 compute-0 python3.9[70396]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 15 10:33:37 compute-0 sudo[70550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kalvnlykdgoefkyvizexpvpczslufohk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794816.953235-62-262341759894806/AnsiballZ_setup.py'
Dec 15 10:33:37 compute-0 sudo[70550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:37 compute-0 python3.9[70552]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 15 10:33:37 compute-0 sudo[70550]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:38 compute-0 sudo[70634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcsyilwmyuhprugxkmsjtihlzwrfnbbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765794816.953235-62-262341759894806/AnsiballZ_dnf.py'
Dec 15 10:33:38 compute-0 sudo[70634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:38 compute-0 python3.9[70636]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 15 10:33:39 compute-0 sudo[70634]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:40 compute-0 python3.9[70787]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:33:41 compute-0 python3.9[70938]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 15 10:33:42 compute-0 python3.9[71088]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 15 10:33:42 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 15 10:33:43 compute-0 python3.9[71239]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 15 10:33:43 compute-0 sshd-session[70246]: Connection closed by 192.168.122.30 port 55036
Dec 15 10:33:43 compute-0 sshd-session[70243]: pam_unix(sshd:session): session closed for user zuul
Dec 15 10:33:43 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Dec 15 10:33:43 compute-0 systemd[1]: session-17.scope: Consumed 5.790s CPU time.
Dec 15 10:33:43 compute-0 systemd-logind[797]: Session 17 logged out. Waiting for processes to exit.
Dec 15 10:33:43 compute-0 systemd-logind[797]: Removed session 17.
Dec 15 10:33:52 compute-0 sshd-session[71264]: Accepted publickey for zuul from 38.102.83.199 port 36770 ssh2: RSA SHA256:oZqduNAfzNWmRCVtuNqNdfr90suxrjNE8fVduO6X/mo
Dec 15 10:33:52 compute-0 systemd-logind[797]: New session 18 of user zuul.
Dec 15 10:33:52 compute-0 systemd[1]: Started Session 18 of User zuul.
Dec 15 10:33:52 compute-0 sshd-session[71264]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 15 10:33:52 compute-0 sudo[71340]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psgzrjwhapovjuuctuszcqqymdftspjg ; /usr/bin/python3'
Dec 15 10:33:52 compute-0 sudo[71340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:53 compute-0 useradd[71344]: new group: name=ceph-admin, GID=42478
Dec 15 10:33:53 compute-0 useradd[71344]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Dec 15 10:33:53 compute-0 sudo[71340]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:53 compute-0 sudo[71426]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dixjxgcrbagkidgfevfnotowsezeohlk ; /usr/bin/python3'
Dec 15 10:33:53 compute-0 sudo[71426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:53 compute-0 sudo[71426]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:53 compute-0 sudo[71499]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpbdicmbgfcpenudpiftejweqdbuemyz ; /usr/bin/python3'
Dec 15 10:33:53 compute-0 sudo[71499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:54 compute-0 sudo[71499]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:54 compute-0 sudo[71549]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yurrybromyaxdcamloytlybiprrcgfhd ; /usr/bin/python3'
Dec 15 10:33:54 compute-0 sudo[71549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:54 compute-0 sudo[71549]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:54 compute-0 sudo[71575]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bthcnjzawiakfeyppauhzbenvliocrpo ; /usr/bin/python3'
Dec 15 10:33:54 compute-0 sudo[71575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:54 compute-0 sudo[71575]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:55 compute-0 sudo[71601]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqhzrywconjusyrwbdevhvahuhslmmyu ; /usr/bin/python3'
Dec 15 10:33:55 compute-0 sudo[71601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:55 compute-0 sudo[71601]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:55 compute-0 sudo[71627]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tluyulscxggzzvrvsgpgqynqkeoosctx ; /usr/bin/python3'
Dec 15 10:33:55 compute-0 sudo[71627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:55 compute-0 sudo[71627]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:56 compute-0 sudo[71705]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbeunwpackmicpxzprtkacfmslpopbdj ; /usr/bin/python3'
Dec 15 10:33:56 compute-0 sudo[71705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:56 compute-0 sudo[71705]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:56 compute-0 sudo[71778]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljigbenhmbwbveprajewwayucsmyytcy ; /usr/bin/python3'
Dec 15 10:33:56 compute-0 sudo[71778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:56 compute-0 sudo[71778]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:56 compute-0 sudo[71880]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouanrymfxdzmbpaedbymbpeiwjkhmeqr ; /usr/bin/python3'
Dec 15 10:33:56 compute-0 sudo[71880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:57 compute-0 sudo[71880]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:57 compute-0 sudo[71953]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icjedroozrwctttpxalrmyqyrmgxyobd ; /usr/bin/python3'
Dec 15 10:33:57 compute-0 sudo[71953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:57 compute-0 sudo[71953]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:58 compute-0 sudo[72003]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfpqulcvouaulvvtvrrxylvriqldqimm ; /usr/bin/python3'
Dec 15 10:33:58 compute-0 sudo[72003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:58 compute-0 python3[72005]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 15 10:33:59 compute-0 sudo[72003]: pam_unix(sudo:session): session closed for user root
Dec 15 10:33:59 compute-0 sudo[72098]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaokseossquogjccaqxeeijqyozfuche ; /usr/bin/python3'
Dec 15 10:33:59 compute-0 sudo[72098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:33:59 compute-0 python3[72100]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 15 10:34:01 compute-0 sudo[72098]: pam_unix(sudo:session): session closed for user root
Dec 15 10:34:01 compute-0 sudo[72125]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqgqvslbfzhuehyjahgbgnwbwmeromys ; /usr/bin/python3'
Dec 15 10:34:01 compute-0 sudo[72125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:34:01 compute-0 python3[72127]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 15 10:34:01 compute-0 sudo[72125]: pam_unix(sudo:session): session closed for user root
Dec 15 10:34:02 compute-0 sudo[72151]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuzfxdenbkgsohaobpgzhmahmtnmwfij ; /usr/bin/python3'
Dec 15 10:34:02 compute-0 sudo[72151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:34:02 compute-0 python3[72153]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:34:02 compute-0 kernel: loop: module loaded
Dec 15 10:34:02 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Dec 15 10:34:02 compute-0 sudo[72151]: pam_unix(sudo:session): session closed for user root
Dec 15 10:34:03 compute-0 chronyd[58479]: Selected source 216.232.132.102 (pool.ntp.org)
Dec 15 10:34:04 compute-0 sudo[72187]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eikxbepgtwjbxeofhzgnxgtvjpraceiy ; /usr/bin/python3'
Dec 15 10:34:04 compute-0 sudo[72187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:34:04 compute-0 python3[72189]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:34:04 compute-0 lvm[72192]: PV /dev/loop3 not used.
Dec 15 10:34:04 compute-0 lvm[72194]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 15 10:34:04 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 15 10:34:04 compute-0 lvm[72204]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 15 10:34:04 compute-0 lvm[72204]: VG ceph_vg0 finished
Dec 15 10:34:04 compute-0 lvm[72202]:   1 logical volume(s) in volume group "ceph_vg0" now active
Dec 15 10:34:04 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec 15 10:34:04 compute-0 sudo[72187]: pam_unix(sudo:session): session closed for user root
Dec 15 10:34:05 compute-0 sudo[72280]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnekzvwafdgrhqbmrizhbholitodlyeb ; /usr/bin/python3'
Dec 15 10:34:05 compute-0 sudo[72280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:34:05 compute-0 python3[72282]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 10:34:05 compute-0 sudo[72280]: pam_unix(sudo:session): session closed for user root
Dec 15 10:34:05 compute-0 sudo[72353]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmaujqfdfjasfoplyzkyjpunghmvfdea ; /usr/bin/python3'
Dec 15 10:34:05 compute-0 sudo[72353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:34:05 compute-0 python3[72355]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765794844.8858213-36881-71374546055093/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:34:05 compute-0 sudo[72353]: pam_unix(sudo:session): session closed for user root
Dec 15 10:34:06 compute-0 sudo[72403]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqzutqvkoihhwrqgafshrvksvdoksfje ; /usr/bin/python3'
Dec 15 10:34:06 compute-0 sudo[72403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:34:06 compute-0 python3[72405]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 15 10:34:06 compute-0 systemd[1]: Reloading.
Dec 15 10:34:06 compute-0 systemd-rc-local-generator[72433]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:34:06 compute-0 systemd-sysv-generator[72436]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:34:06 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 15 10:34:06 compute-0 bash[72444]: /dev/loop3: [64513]:4327948 (/var/lib/ceph-osd-0.img)
Dec 15 10:34:06 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 15 10:34:06 compute-0 sudo[72403]: pam_unix(sudo:session): session closed for user root
Dec 15 10:34:06 compute-0 lvm[72446]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 15 10:34:06 compute-0 lvm[72446]: VG ceph_vg0 finished
Dec 15 10:34:08 compute-0 python3[72470]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 15 10:34:11 compute-0 sudo[72561]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afwiteubznvpecrzbepleobqlujoxypg ; /usr/bin/python3'
Dec 15 10:34:11 compute-0 sudo[72561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:34:11 compute-0 python3[72563]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 15 10:34:13 compute-0 sudo[72561]: pam_unix(sudo:session): session closed for user root
Dec 15 10:34:13 compute-0 sudo[72618]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lleraqsbnwordnylijtiadhdriqwirot ; /usr/bin/python3'
Dec 15 10:34:13 compute-0 sudo[72618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:34:13 compute-0 python3[72620]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 15 10:34:16 compute-0 groupadd[72630]: group added to /etc/group: name=cephadm, GID=992
Dec 15 10:34:16 compute-0 groupadd[72630]: group added to /etc/gshadow: name=cephadm
Dec 15 10:34:16 compute-0 groupadd[72630]: new group: name=cephadm, GID=992
Dec 15 10:34:16 compute-0 useradd[72637]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Dec 15 10:34:16 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 15 10:34:16 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 15 10:34:17 compute-0 sudo[72618]: pam_unix(sudo:session): session closed for user root
Dec 15 10:34:17 compute-0 sudo[72732]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdregrtnwvvkcuravfolzcrlhcywuvvf ; /usr/bin/python3'
Dec 15 10:34:17 compute-0 sudo[72732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:34:17 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 15 10:34:17 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 15 10:34:17 compute-0 systemd[1]: run-r06c13da0fb054ec5b973fef20c7f3456.service: Deactivated successfully.
Dec 15 10:34:17 compute-0 python3[72734]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 15 10:34:17 compute-0 sudo[72732]: pam_unix(sudo:session): session closed for user root
Dec 15 10:34:17 compute-0 sudo[72761]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dikbfordocgkfpqqcnspovogrdpglpxc ; /usr/bin/python3'
Dec 15 10:34:17 compute-0 sudo[72761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:34:17 compute-0 python3[72763]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:34:18 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 15 10:34:18 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 15 10:34:18 compute-0 sudo[72761]: pam_unix(sudo:session): session closed for user root
Dec 15 10:34:18 compute-0 sudo[72825]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjuyixhiizyhyvuzxoqwduvyejebabxf ; /usr/bin/python3'
Dec 15 10:34:18 compute-0 sudo[72825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:34:18 compute-0 python3[72827]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:34:18 compute-0 sudo[72825]: pam_unix(sudo:session): session closed for user root
Dec 15 10:34:18 compute-0 sudo[72851]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjvyeofzkwypuugolvbdsgqwwgpdewvr ; /usr/bin/python3'
Dec 15 10:34:18 compute-0 sudo[72851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:34:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 15 10:34:19 compute-0 python3[72853]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:34:19 compute-0 sudo[72851]: pam_unix(sudo:session): session closed for user root
Dec 15 10:34:19 compute-0 sudo[72929]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-navtfiyehhqnbzeqzghysikjfektoosf ; /usr/bin/python3'
Dec 15 10:34:19 compute-0 sudo[72929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:34:19 compute-0 python3[72931]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 10:34:19 compute-0 sudo[72929]: pam_unix(sudo:session): session closed for user root
Dec 15 10:34:19 compute-0 sudo[73002]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itfhsluivtdyeihwqiqysxbohvieegvp ; /usr/bin/python3'
Dec 15 10:34:19 compute-0 sudo[73002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:34:20 compute-0 python3[73004]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765794859.4291914-37073-85881642559850/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:34:20 compute-0 sudo[73002]: pam_unix(sudo:session): session closed for user root
Dec 15 10:34:20 compute-0 sudo[73104]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taxedbykqqxgycnwomxaluylexolqanj ; /usr/bin/python3'
Dec 15 10:34:20 compute-0 sudo[73104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:34:20 compute-0 python3[73106]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 10:34:20 compute-0 sudo[73104]: pam_unix(sudo:session): session closed for user root
Dec 15 10:34:20 compute-0 sudo[73177]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfywjddrqbhszysolrnjdukblyjiskom ; /usr/bin/python3'
Dec 15 10:34:20 compute-0 sudo[73177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:34:21 compute-0 python3[73179]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765794860.5218787-37091-201433648733110/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:34:21 compute-0 sudo[73177]: pam_unix(sudo:session): session closed for user root
Dec 15 10:34:21 compute-0 sudo[73227]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfjktoixlwacrooezpdnfgndctdhvxrf ; /usr/bin/python3'
Dec 15 10:34:21 compute-0 sudo[73227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:34:21 compute-0 python3[73229]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 15 10:34:21 compute-0 sudo[73227]: pam_unix(sudo:session): session closed for user root
Dec 15 10:34:21 compute-0 sudo[73255]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzskirmsqaoivonrdrzsxikpiallmgdo ; /usr/bin/python3'
Dec 15 10:34:21 compute-0 sudo[73255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:34:21 compute-0 python3[73257]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 15 10:34:21 compute-0 sudo[73255]: pam_unix(sudo:session): session closed for user root
Dec 15 10:34:21 compute-0 sudo[73283]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clhzxgujfwmurgqhjganbhtzoknesgpt ; /usr/bin/python3'
Dec 15 10:34:21 compute-0 sudo[73283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:34:22 compute-0 python3[73285]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 15 10:34:22 compute-0 sudo[73283]: pam_unix(sudo:session): session closed for user root
Dec 15 10:34:22 compute-0 sudo[73311]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vblsiauluobbanxuvldiirygxeooxcxs ; /usr/bin/python3'
Dec 15 10:34:22 compute-0 sudo[73311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:34:22 compute-0 python3[73313]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 77365f67-614e-5a8d-b658-640395550c79 --config /home/ceph-admin/assimilate_ceph.conf --skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:34:22 compute-0 sshd-session[73317]: Accepted publickey for ceph-admin from 192.168.122.100 port 57980 ssh2: RSA SHA256:3azC1xJ0J6DfmukNQYX52hxlEZBJA1o57dtmni4O/KI
Dec 15 10:34:22 compute-0 systemd-logind[797]: New session 19 of user ceph-admin.
Dec 15 10:34:22 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 15 10:34:22 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 15 10:34:22 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 15 10:34:22 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 15 10:34:22 compute-0 systemd[73321]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 15 10:34:22 compute-0 systemd[73321]: Queued start job for default target Main User Target.
Dec 15 10:34:22 compute-0 systemd[73321]: Created slice User Application Slice.
Dec 15 10:34:22 compute-0 systemd[73321]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 15 10:34:22 compute-0 systemd[73321]: Started Daily Cleanup of User's Temporary Directories.
Dec 15 10:34:22 compute-0 systemd[73321]: Reached target Paths.
Dec 15 10:34:22 compute-0 systemd[73321]: Reached target Timers.
Dec 15 10:34:22 compute-0 systemd[73321]: Starting D-Bus User Message Bus Socket...
Dec 15 10:34:22 compute-0 systemd[73321]: Starting Create User's Volatile Files and Directories...
Dec 15 10:34:22 compute-0 systemd[73321]: Listening on D-Bus User Message Bus Socket.
Dec 15 10:34:22 compute-0 systemd[73321]: Reached target Sockets.
Dec 15 10:34:22 compute-0 systemd[73321]: Finished Create User's Volatile Files and Directories.
Dec 15 10:34:22 compute-0 systemd[73321]: Reached target Basic System.
Dec 15 10:34:22 compute-0 systemd[73321]: Reached target Main User Target.
Dec 15 10:34:22 compute-0 systemd[73321]: Startup finished in 103ms.
Dec 15 10:34:22 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 15 10:34:22 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Dec 15 10:34:22 compute-0 sshd-session[73317]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 15 10:34:23 compute-0 sudo[73337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Dec 15 10:34:23 compute-0 sudo[73337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:34:23 compute-0 sudo[73337]: pam_unix(sudo:session): session closed for user root
Dec 15 10:34:23 compute-0 sshd-session[73336]: Received disconnect from 192.168.122.100 port 57980:11: disconnected by user
Dec 15 10:34:23 compute-0 sshd-session[73336]: Disconnected from user ceph-admin 192.168.122.100 port 57980
Dec 15 10:34:23 compute-0 sshd-session[73317]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 15 10:34:23 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Dec 15 10:34:23 compute-0 systemd-logind[797]: Session 19 logged out. Waiting for processes to exit.
Dec 15 10:34:23 compute-0 systemd-logind[797]: Removed session 19.
Dec 15 10:34:23 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 15 10:34:23 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 15 10:34:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3536642459-lower\x2dmapped.mount: Deactivated successfully.
Dec 15 10:34:33 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Dec 15 10:34:33 compute-0 systemd[73321]: Activating special unit Exit the Session...
Dec 15 10:34:33 compute-0 systemd[73321]: Stopped target Main User Target.
Dec 15 10:34:33 compute-0 systemd[73321]: Stopped target Basic System.
Dec 15 10:34:33 compute-0 systemd[73321]: Stopped target Paths.
Dec 15 10:34:33 compute-0 systemd[73321]: Stopped target Sockets.
Dec 15 10:34:33 compute-0 systemd[73321]: Stopped target Timers.
Dec 15 10:34:33 compute-0 systemd[73321]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 15 10:34:33 compute-0 systemd[73321]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 15 10:34:33 compute-0 systemd[73321]: Closed D-Bus User Message Bus Socket.
Dec 15 10:34:33 compute-0 systemd[73321]: Stopped Create User's Volatile Files and Directories.
Dec 15 10:34:33 compute-0 systemd[73321]: Removed slice User Application Slice.
Dec 15 10:34:33 compute-0 systemd[73321]: Reached target Shutdown.
Dec 15 10:34:33 compute-0 systemd[73321]: Finished Exit the Session.
Dec 15 10:34:33 compute-0 systemd[73321]: Reached target Exit the Session.
Dec 15 10:34:33 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Dec 15 10:34:33 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Dec 15 10:34:33 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec 15 10:34:33 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec 15 10:34:33 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec 15 10:34:33 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec 15 10:34:33 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Dec 15 10:34:41 compute-0 podman[73414]: 2025-12-15 10:34:41.488816805 +0000 UTC m=+18.164667618 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:34:41 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 15 10:34:41 compute-0 podman[73476]: 2025-12-15 10:34:41.571097938 +0000 UTC m=+0.050701111 container create becc6ea97b896f5c58cc8813a1358a4201408fe52acd566b93208d3482a8b989 (image=quay.io/ceph/ceph:v19, name=inspiring_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:34:41 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec 15 10:34:41 compute-0 systemd[1]: Started libpod-conmon-becc6ea97b896f5c58cc8813a1358a4201408fe52acd566b93208d3482a8b989.scope.
Dec 15 10:34:41 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:34:41 compute-0 podman[73476]: 2025-12-15 10:34:41.548708827 +0000 UTC m=+0.028312030 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:34:41 compute-0 podman[73476]: 2025-12-15 10:34:41.674557114 +0000 UTC m=+0.154160337 container init becc6ea97b896f5c58cc8813a1358a4201408fe52acd566b93208d3482a8b989 (image=quay.io/ceph/ceph:v19, name=inspiring_rhodes, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 15 10:34:41 compute-0 podman[73476]: 2025-12-15 10:34:41.681416452 +0000 UTC m=+0.161019625 container start becc6ea97b896f5c58cc8813a1358a4201408fe52acd566b93208d3482a8b989 (image=quay.io/ceph/ceph:v19, name=inspiring_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 15 10:34:41 compute-0 podman[73476]: 2025-12-15 10:34:41.684531071 +0000 UTC m=+0.164134274 container attach becc6ea97b896f5c58cc8813a1358a4201408fe52acd566b93208d3482a8b989 (image=quay.io/ceph/ceph:v19, name=inspiring_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 15 10:34:41 compute-0 inspiring_rhodes[73492]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Dec 15 10:34:41 compute-0 systemd[1]: libpod-becc6ea97b896f5c58cc8813a1358a4201408fe52acd566b93208d3482a8b989.scope: Deactivated successfully.
Dec 15 10:34:41 compute-0 podman[73476]: 2025-12-15 10:34:41.787912824 +0000 UTC m=+0.267515997 container died becc6ea97b896f5c58cc8813a1358a4201408fe52acd566b93208d3482a8b989 (image=quay.io/ceph/ceph:v19, name=inspiring_rhodes, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:34:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b7190499516fed006b875ac38036cfeae40b1b09f80895a908ae2441f3e7ef0-merged.mount: Deactivated successfully.
Dec 15 10:34:41 compute-0 podman[73476]: 2025-12-15 10:34:41.820063575 +0000 UTC m=+0.299666758 container remove becc6ea97b896f5c58cc8813a1358a4201408fe52acd566b93208d3482a8b989 (image=quay.io/ceph/ceph:v19, name=inspiring_rhodes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:34:41 compute-0 systemd[1]: libpod-conmon-becc6ea97b896f5c58cc8813a1358a4201408fe52acd566b93208d3482a8b989.scope: Deactivated successfully.
Dec 15 10:34:41 compute-0 podman[73508]: 2025-12-15 10:34:41.877107677 +0000 UTC m=+0.038181723 container create 75ebc05899ee60186e073fc5ee755cf522da329771bd9e8cad82652605ac4f32 (image=quay.io/ceph/ceph:v19, name=friendly_tu, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 15 10:34:41 compute-0 systemd[1]: Started libpod-conmon-75ebc05899ee60186e073fc5ee755cf522da329771bd9e8cad82652605ac4f32.scope.
Dec 15 10:34:41 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:34:41 compute-0 podman[73508]: 2025-12-15 10:34:41.932045261 +0000 UTC m=+0.093119367 container init 75ebc05899ee60186e073fc5ee755cf522da329771bd9e8cad82652605ac4f32 (image=quay.io/ceph/ceph:v19, name=friendly_tu, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:34:41 compute-0 podman[73508]: 2025-12-15 10:34:41.936941128 +0000 UTC m=+0.098015174 container start 75ebc05899ee60186e073fc5ee755cf522da329771bd9e8cad82652605ac4f32 (image=quay.io/ceph/ceph:v19, name=friendly_tu, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 15 10:34:41 compute-0 friendly_tu[73526]: 167 167
Dec 15 10:34:41 compute-0 systemd[1]: libpod-75ebc05899ee60186e073fc5ee755cf522da329771bd9e8cad82652605ac4f32.scope: Deactivated successfully.
Dec 15 10:34:41 compute-0 podman[73508]: 2025-12-15 10:34:41.944441175 +0000 UTC m=+0.105515321 container attach 75ebc05899ee60186e073fc5ee755cf522da329771bd9e8cad82652605ac4f32 (image=quay.io/ceph/ceph:v19, name=friendly_tu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 15 10:34:41 compute-0 podman[73508]: 2025-12-15 10:34:41.945132508 +0000 UTC m=+0.106206554 container died 75ebc05899ee60186e073fc5ee755cf522da329771bd9e8cad82652605ac4f32 (image=quay.io/ceph/ceph:v19, name=friendly_tu, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 15 10:34:41 compute-0 podman[73508]: 2025-12-15 10:34:41.861855302 +0000 UTC m=+0.022929378 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:34:41 compute-0 podman[73508]: 2025-12-15 10:34:41.995514688 +0000 UTC m=+0.156588724 container remove 75ebc05899ee60186e073fc5ee755cf522da329771bd9e8cad82652605ac4f32 (image=quay.io/ceph/ceph:v19, name=friendly_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 15 10:34:42 compute-0 systemd[1]: libpod-conmon-75ebc05899ee60186e073fc5ee755cf522da329771bd9e8cad82652605ac4f32.scope: Deactivated successfully.
Dec 15 10:34:42 compute-0 podman[73544]: 2025-12-15 10:34:42.058081454 +0000 UTC m=+0.042366567 container create cb98ed62031685d58cdf99dbc0bd6ed35689ca70ef3d7c3eb16ff31f41ef7ab5 (image=quay.io/ceph/ceph:v19, name=amazing_liskov, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:34:42 compute-0 systemd[1]: Started libpod-conmon-cb98ed62031685d58cdf99dbc0bd6ed35689ca70ef3d7c3eb16ff31f41ef7ab5.scope.
Dec 15 10:34:42 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:34:42 compute-0 podman[73544]: 2025-12-15 10:34:42.114803896 +0000 UTC m=+0.099089039 container init cb98ed62031685d58cdf99dbc0bd6ed35689ca70ef3d7c3eb16ff31f41ef7ab5 (image=quay.io/ceph/ceph:v19, name=amazing_liskov, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 15 10:34:42 compute-0 podman[73544]: 2025-12-15 10:34:42.119272647 +0000 UTC m=+0.103557760 container start cb98ed62031685d58cdf99dbc0bd6ed35689ca70ef3d7c3eb16ff31f41ef7ab5 (image=quay.io/ceph/ceph:v19, name=amazing_liskov, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 15 10:34:42 compute-0 podman[73544]: 2025-12-15 10:34:42.1240694 +0000 UTC m=+0.108354533 container attach cb98ed62031685d58cdf99dbc0bd6ed35689ca70ef3d7c3eb16ff31f41ef7ab5 (image=quay.io/ceph/ceph:v19, name=amazing_liskov, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:34:42 compute-0 podman[73544]: 2025-12-15 10:34:42.037505051 +0000 UTC m=+0.021790184 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:34:42 compute-0 amazing_liskov[73560]: AQBC5D9pRcg1CBAAR4+A8NA9FwrsoO8UCbbbwQ==
Dec 15 10:34:42 compute-0 systemd[1]: libpod-cb98ed62031685d58cdf99dbc0bd6ed35689ca70ef3d7c3eb16ff31f41ef7ab5.scope: Deactivated successfully.
Dec 15 10:34:42 compute-0 podman[73544]: 2025-12-15 10:34:42.140918286 +0000 UTC m=+0.125203399 container died cb98ed62031685d58cdf99dbc0bd6ed35689ca70ef3d7c3eb16ff31f41ef7ab5 (image=quay.io/ceph/ceph:v19, name=amazing_liskov, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:34:42 compute-0 podman[73544]: 2025-12-15 10:34:42.174555494 +0000 UTC m=+0.158840607 container remove cb98ed62031685d58cdf99dbc0bd6ed35689ca70ef3d7c3eb16ff31f41ef7ab5 (image=quay.io/ceph/ceph:v19, name=amazing_liskov, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Dec 15 10:34:42 compute-0 systemd[1]: libpod-conmon-cb98ed62031685d58cdf99dbc0bd6ed35689ca70ef3d7c3eb16ff31f41ef7ab5.scope: Deactivated successfully.
Dec 15 10:34:42 compute-0 podman[73579]: 2025-12-15 10:34:42.231624506 +0000 UTC m=+0.039096693 container create 0caf34ed3c7ff0ce01a47cc549d1b504eba9d7075cd6f0378154f2aba32c65a5 (image=quay.io/ceph/ceph:v19, name=vigorous_wu, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:34:42 compute-0 systemd[1]: Started libpod-conmon-0caf34ed3c7ff0ce01a47cc549d1b504eba9d7075cd6f0378154f2aba32c65a5.scope.
Dec 15 10:34:42 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:34:42 compute-0 podman[73579]: 2025-12-15 10:34:42.295772103 +0000 UTC m=+0.103244310 container init 0caf34ed3c7ff0ce01a47cc549d1b504eba9d7075cd6f0378154f2aba32c65a5 (image=quay.io/ceph/ceph:v19, name=vigorous_wu, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 15 10:34:42 compute-0 podman[73579]: 2025-12-15 10:34:42.300416691 +0000 UTC m=+0.107888878 container start 0caf34ed3c7ff0ce01a47cc549d1b504eba9d7075cd6f0378154f2aba32c65a5 (image=quay.io/ceph/ceph:v19, name=vigorous_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 15 10:34:42 compute-0 podman[73579]: 2025-12-15 10:34:42.303877481 +0000 UTC m=+0.111349688 container attach 0caf34ed3c7ff0ce01a47cc549d1b504eba9d7075cd6f0378154f2aba32c65a5 (image=quay.io/ceph/ceph:v19, name=vigorous_wu, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 15 10:34:42 compute-0 podman[73579]: 2025-12-15 10:34:42.214735949 +0000 UTC m=+0.022208156 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:34:42 compute-0 vigorous_wu[73597]: AQBC5D9pCpcJExAA9umyDcjYyeT1CLEhbK3+UQ==
Dec 15 10:34:42 compute-0 systemd[1]: libpod-0caf34ed3c7ff0ce01a47cc549d1b504eba9d7075cd6f0378154f2aba32c65a5.scope: Deactivated successfully.
Dec 15 10:34:42 compute-0 podman[73579]: 2025-12-15 10:34:42.32304344 +0000 UTC m=+0.130515647 container died 0caf34ed3c7ff0ce01a47cc549d1b504eba9d7075cd6f0378154f2aba32c65a5 (image=quay.io/ceph/ceph:v19, name=vigorous_wu, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 15 10:34:42 compute-0 podman[73579]: 2025-12-15 10:34:42.359286611 +0000 UTC m=+0.166758798 container remove 0caf34ed3c7ff0ce01a47cc549d1b504eba9d7075cd6f0378154f2aba32c65a5 (image=quay.io/ceph/ceph:v19, name=vigorous_wu, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 15 10:34:42 compute-0 systemd[1]: libpod-conmon-0caf34ed3c7ff0ce01a47cc549d1b504eba9d7075cd6f0378154f2aba32c65a5.scope: Deactivated successfully.
Dec 15 10:34:42 compute-0 podman[73615]: 2025-12-15 10:34:42.424158771 +0000 UTC m=+0.042137939 container create 705fe0f50988d3ad10c867d1e4f89a2aa7854129f558a2a7a80414cf25de9fb7 (image=quay.io/ceph/ceph:v19, name=agitated_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:34:42 compute-0 systemd[1]: Started libpod-conmon-705fe0f50988d3ad10c867d1e4f89a2aa7854129f558a2a7a80414cf25de9fb7.scope.
Dec 15 10:34:42 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:34:42 compute-0 podman[73615]: 2025-12-15 10:34:42.406389737 +0000 UTC m=+0.024368935 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:34:42 compute-0 podman[73615]: 2025-12-15 10:34:42.873692868 +0000 UTC m=+0.491672046 container init 705fe0f50988d3ad10c867d1e4f89a2aa7854129f558a2a7a80414cf25de9fb7 (image=quay.io/ceph/ceph:v19, name=agitated_mayer, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 15 10:34:42 compute-0 podman[73615]: 2025-12-15 10:34:42.879369608 +0000 UTC m=+0.497348776 container start 705fe0f50988d3ad10c867d1e4f89a2aa7854129f558a2a7a80414cf25de9fb7 (image=quay.io/ceph/ceph:v19, name=agitated_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 15 10:34:42 compute-0 agitated_mayer[73631]: AQBC5D9peueTNRAAkvX++itgWbovV5l/1P4WhQ==
Dec 15 10:34:42 compute-0 systemd[1]: libpod-705fe0f50988d3ad10c867d1e4f89a2aa7854129f558a2a7a80414cf25de9fb7.scope: Deactivated successfully.
Dec 15 10:34:44 compute-0 podman[73615]: 2025-12-15 10:34:44.699236675 +0000 UTC m=+2.317215893 container attach 705fe0f50988d3ad10c867d1e4f89a2aa7854129f558a2a7a80414cf25de9fb7 (image=quay.io/ceph/ceph:v19, name=agitated_mayer, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:34:44 compute-0 podman[73615]: 2025-12-15 10:34:44.700105662 +0000 UTC m=+2.318084830 container died 705fe0f50988d3ad10c867d1e4f89a2aa7854129f558a2a7a80414cf25de9fb7 (image=quay.io/ceph/ceph:v19, name=agitated_mayer, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:34:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc158090e47df7a311c5c3fe5edf9dbe5fe24d6cf39d304f0173bdd0ee751eb9-merged.mount: Deactivated successfully.
Dec 15 10:34:44 compute-0 podman[73615]: 2025-12-15 10:34:44.973867079 +0000 UTC m=+2.591846247 container remove 705fe0f50988d3ad10c867d1e4f89a2aa7854129f558a2a7a80414cf25de9fb7 (image=quay.io/ceph/ceph:v19, name=agitated_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 15 10:34:44 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 15 10:34:45 compute-0 podman[73652]: 2025-12-15 10:34:45.014705385 +0000 UTC m=+0.021586526 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:34:45 compute-0 podman[73652]: 2025-12-15 10:34:45.323827163 +0000 UTC m=+0.330708304 container create 9a7e7a62656d57659e1c6a34b9b5e27f6ac30d9314877c83ad540752e7f2bbcb (image=quay.io/ceph/ceph:v19, name=festive_goldwasser, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 15 10:34:45 compute-0 systemd[1]: Started libpod-conmon-9a7e7a62656d57659e1c6a34b9b5e27f6ac30d9314877c83ad540752e7f2bbcb.scope.
Dec 15 10:34:45 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c327e55ed533c99ce2518c9051af103333343cf4d35ef1e72630b349b2f310/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:45 compute-0 podman[73652]: 2025-12-15 10:34:45.439258539 +0000 UTC m=+0.446139710 container init 9a7e7a62656d57659e1c6a34b9b5e27f6ac30d9314877c83ad540752e7f2bbcb (image=quay.io/ceph/ceph:v19, name=festive_goldwasser, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 15 10:34:45 compute-0 podman[73652]: 2025-12-15 10:34:45.444309169 +0000 UTC m=+0.451190310 container start 9a7e7a62656d57659e1c6a34b9b5e27f6ac30d9314877c83ad540752e7f2bbcb (image=quay.io/ceph/ceph:v19, name=festive_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:34:45 compute-0 festive_goldwasser[73668]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec 15 10:34:45 compute-0 festive_goldwasser[73668]: setting min_mon_release = quincy
Dec 15 10:34:45 compute-0 festive_goldwasser[73668]: /usr/bin/monmaptool: set fsid to 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:34:45 compute-0 festive_goldwasser[73668]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec 15 10:34:45 compute-0 systemd[1]: libpod-9a7e7a62656d57659e1c6a34b9b5e27f6ac30d9314877c83ad540752e7f2bbcb.scope: Deactivated successfully.
Dec 15 10:34:45 compute-0 podman[73652]: 2025-12-15 10:34:45.512247006 +0000 UTC m=+0.519128197 container attach 9a7e7a62656d57659e1c6a34b9b5e27f6ac30d9314877c83ad540752e7f2bbcb (image=quay.io/ceph/ceph:v19, name=festive_goldwasser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:34:45 compute-0 podman[73652]: 2025-12-15 10:34:45.513057372 +0000 UTC m=+0.519938533 container died 9a7e7a62656d57659e1c6a34b9b5e27f6ac30d9314877c83ad540752e7f2bbcb (image=quay.io/ceph/ceph:v19, name=festive_goldwasser, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 15 10:34:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0c327e55ed533c99ce2518c9051af103333343cf4d35ef1e72630b349b2f310-merged.mount: Deactivated successfully.
Dec 15 10:34:45 compute-0 podman[73652]: 2025-12-15 10:34:45.627291971 +0000 UTC m=+0.634173112 container remove 9a7e7a62656d57659e1c6a34b9b5e27f6ac30d9314877c83ad540752e7f2bbcb (image=quay.io/ceph/ceph:v19, name=festive_goldwasser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:34:45 compute-0 systemd[1]: libpod-conmon-9a7e7a62656d57659e1c6a34b9b5e27f6ac30d9314877c83ad540752e7f2bbcb.scope: Deactivated successfully.
Dec 15 10:34:45 compute-0 systemd[1]: libpod-conmon-705fe0f50988d3ad10c867d1e4f89a2aa7854129f558a2a7a80414cf25de9fb7.scope: Deactivated successfully.
Dec 15 10:34:45 compute-0 podman[73686]: 2025-12-15 10:34:45.67073693 +0000 UTC m=+0.022873487 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:34:45 compute-0 podman[73686]: 2025-12-15 10:34:45.995849926 +0000 UTC m=+0.347986463 container create 32997a94976c5edf276de164c0a56e43bafdf25eb2219696cd00691f18e4e9ce (image=quay.io/ceph/ceph:v19, name=dreamy_herschel, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 15 10:34:46 compute-0 systemd[1]: Started libpod-conmon-32997a94976c5edf276de164c0a56e43bafdf25eb2219696cd00691f18e4e9ce.scope.
Dec 15 10:34:46 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:34:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e9559d4474a98fc54abc0e919f544ae2ccd5c6bebd2cda7a2e0c44b56e52b1/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e9559d4474a98fc54abc0e919f544ae2ccd5c6bebd2cda7a2e0c44b56e52b1/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e9559d4474a98fc54abc0e919f544ae2ccd5c6bebd2cda7a2e0c44b56e52b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e9559d4474a98fc54abc0e919f544ae2ccd5c6bebd2cda7a2e0c44b56e52b1/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:46 compute-0 podman[73686]: 2025-12-15 10:34:46.202317873 +0000 UTC m=+0.554454430 container init 32997a94976c5edf276de164c0a56e43bafdf25eb2219696cd00691f18e4e9ce (image=quay.io/ceph/ceph:v19, name=dreamy_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 15 10:34:46 compute-0 podman[73686]: 2025-12-15 10:34:46.207105174 +0000 UTC m=+0.559241711 container start 32997a94976c5edf276de164c0a56e43bafdf25eb2219696cd00691f18e4e9ce (image=quay.io/ceph/ceph:v19, name=dreamy_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:34:46 compute-0 podman[73686]: 2025-12-15 10:34:46.291891477 +0000 UTC m=+0.644028114 container attach 32997a94976c5edf276de164c0a56e43bafdf25eb2219696cd00691f18e4e9ce (image=quay.io/ceph/ceph:v19, name=dreamy_herschel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Dec 15 10:34:46 compute-0 systemd[1]: libpod-32997a94976c5edf276de164c0a56e43bafdf25eb2219696cd00691f18e4e9ce.scope: Deactivated successfully.
Dec 15 10:34:46 compute-0 podman[73730]: 2025-12-15 10:34:46.919234882 +0000 UTC m=+0.021430182 container died 32997a94976c5edf276de164c0a56e43bafdf25eb2219696cd00691f18e4e9ce (image=quay.io/ceph/ceph:v19, name=dreamy_herschel, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 15 10:34:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-67e9559d4474a98fc54abc0e919f544ae2ccd5c6bebd2cda7a2e0c44b56e52b1-merged.mount: Deactivated successfully.
Dec 15 10:34:47 compute-0 podman[73730]: 2025-12-15 10:34:47.321083423 +0000 UTC m=+0.423278703 container remove 32997a94976c5edf276de164c0a56e43bafdf25eb2219696cd00691f18e4e9ce (image=quay.io/ceph/ceph:v19, name=dreamy_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:34:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 15 10:34:47 compute-0 systemd[1]: libpod-conmon-32997a94976c5edf276de164c0a56e43bafdf25eb2219696cd00691f18e4e9ce.scope: Deactivated successfully.
Dec 15 10:34:47 compute-0 systemd[1]: Reloading.
Dec 15 10:34:47 compute-0 systemd-sysv-generator[73777]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:34:47 compute-0 systemd-rc-local-generator[73774]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:34:47 compute-0 systemd[1]: Reloading.
Dec 15 10:34:47 compute-0 systemd-rc-local-generator[73811]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:34:47 compute-0 systemd-sysv-generator[73814]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:34:47 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Dec 15 10:34:47 compute-0 systemd[1]: Reloading.
Dec 15 10:34:47 compute-0 systemd-sysv-generator[73851]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:34:47 compute-0 systemd-rc-local-generator[73848]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:34:48 compute-0 systemd[1]: Reached target Ceph cluster 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:34:48 compute-0 systemd[1]: Reloading.
Dec 15 10:34:48 compute-0 systemd-rc-local-generator[73887]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:34:48 compute-0 systemd-sysv-generator[73891]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:34:48 compute-0 systemd[1]: Reloading.
Dec 15 10:34:48 compute-0 systemd-rc-local-generator[73929]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:34:48 compute-0 systemd-sysv-generator[73932]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:34:48 compute-0 systemd[1]: Created slice Slice /system/ceph-77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:34:48 compute-0 systemd[1]: Reached target System Time Set.
Dec 15 10:34:48 compute-0 systemd[1]: Reached target System Time Synchronized.
Dec 15 10:34:48 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 77365f67-614e-5a8d-b658-640395550c79...
Dec 15 10:34:48 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 15 10:34:48 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 15 10:34:48 compute-0 podman[73982]: 2025-12-15 10:34:48.899476573 +0000 UTC m=+0.053451269 container create 97d0713e4f1e0bfa7725536e0f6d871684976d8b62cc8ca81e2420deb933c80c (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d1b6c27223915ba7c7076ccc131940a2ae5972a8acb6e72af919666ff5f8f65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d1b6c27223915ba7c7076ccc131940a2ae5972a8acb6e72af919666ff5f8f65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d1b6c27223915ba7c7076ccc131940a2ae5972a8acb6e72af919666ff5f8f65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d1b6c27223915ba7c7076ccc131940a2ae5972a8acb6e72af919666ff5f8f65/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:48 compute-0 podman[73982]: 2025-12-15 10:34:48.959740286 +0000 UTC m=+0.113715002 container init 97d0713e4f1e0bfa7725536e0f6d871684976d8b62cc8ca81e2420deb933c80c (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:34:48 compute-0 podman[73982]: 2025-12-15 10:34:48.866821785 +0000 UTC m=+0.020796501 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:34:48 compute-0 podman[73982]: 2025-12-15 10:34:48.965134557 +0000 UTC m=+0.119109253 container start 97d0713e4f1e0bfa7725536e0f6d871684976d8b62cc8ca81e2420deb933c80c (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Dec 15 10:34:48 compute-0 bash[73982]: 97d0713e4f1e0bfa7725536e0f6d871684976d8b62cc8ca81e2420deb933c80c
Dec 15 10:34:48 compute-0 systemd[1]: Started Ceph mon.compute-0 for 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:34:48 compute-0 ceph-mon[74002]: set uid:gid to 167:167 (ceph:ceph)
Dec 15 10:34:48 compute-0 ceph-mon[74002]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Dec 15 10:34:48 compute-0 ceph-mon[74002]: pidfile_write: ignore empty --pid-file
Dec 15 10:34:49 compute-0 ceph-mon[74002]: load: jerasure load: lrc 
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: RocksDB version: 7.9.2
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: Git sha 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: DB SUMMARY
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: DB Session ID:  OI0NPT0JHX0XCIPK3DBC
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: CURRENT file:  CURRENT
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: IDENTITY file:  IDENTITY
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                         Options.error_if_exists: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                       Options.create_if_missing: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                         Options.paranoid_checks: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                                     Options.env: 0x5559d2bd7c20
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                                Options.info_log: 0x5559d44f6d60
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                Options.max_file_opening_threads: 16
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                              Options.statistics: (nil)
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                               Options.use_fsync: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                       Options.max_log_file_size: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                         Options.allow_fallocate: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                        Options.use_direct_reads: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:          Options.create_missing_column_families: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                              Options.db_log_dir: 
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                                 Options.wal_dir: 
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                   Options.advise_random_on_open: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                    Options.write_buffer_manager: 0x5559d44fb900
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                            Options.rate_limiter: (nil)
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                  Options.unordered_write: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                               Options.row_cache: None
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                              Options.wal_filter: None
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:             Options.allow_ingest_behind: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:             Options.two_write_queues: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:             Options.manual_wal_flush: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:             Options.wal_compression: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:             Options.atomic_flush: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                 Options.log_readahead_size: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:             Options.allow_data_in_errors: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:             Options.db_host_id: __hostname__
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:             Options.max_background_jobs: 2
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:             Options.max_background_compactions: -1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:             Options.max_subcompactions: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:             Options.max_total_wal_size: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                          Options.max_open_files: -1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                          Options.bytes_per_sync: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:       Options.compaction_readahead_size: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                  Options.max_background_flushes: -1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: Compression algorithms supported:
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:         kZSTD supported: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:         kXpressCompression supported: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:         kBZip2Compression supported: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:         kLZ4Compression supported: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:         kZlibCompression supported: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:         kLZ4HCCompression supported: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:         kSnappyCompression supported: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:           Options.merge_operator: 
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:        Options.compaction_filter: None
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:        Options.compaction_filter_factory: None
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:  Options.sst_partitioner_factory: None
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5559d44f6500)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5559d451b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:        Options.write_buffer_size: 33554432
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:  Options.max_write_buffer_number: 2
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:          Options.compression: NoCompression
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:       Options.prefix_extractor: nullptr
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:             Options.num_levels: 7
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                  Options.compression_opts.level: 32767
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:               Options.compression_opts.strategy: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                  Options.compression_opts.enabled: false
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                        Options.arena_block_size: 1048576
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                Options.disable_auto_compactions: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                   Options.inplace_update_support: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                           Options.bloom_locality: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                    Options.max_successive_merges: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                Options.paranoid_file_checks: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                Options.force_consistency_checks: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                Options.report_bg_io_stats: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                               Options.ttl: 2592000
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                       Options.enable_blob_files: false
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                           Options.min_blob_size: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                          Options.blob_file_size: 268435456
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb:                Options.blob_file_starting_level: 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d36e3d93-cef6-4482-9c71-0054ae87e0c9
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765794889004134, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765794889023150, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765794889, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d36e3d93-cef6-4482-9c71-0054ae87e0c9", "db_session_id": "OI0NPT0JHX0XCIPK3DBC", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765794889023312, "job": 1, "event": "recovery_finished"}
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5559d451ce00
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: DB pointer 0x5559d4626000
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 15 10:34:49 compute-0 ceph-mon[74002]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.06 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.06 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5559d451b350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 15 10:34:49 compute-0 ceph-mon[74002]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@-1(???) e0 preinit fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(probing) e0 win_standalone_election
Dec 15 10:34:49 compute-0 ceph-mon[74002]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 15 10:34:49 compute-0 podman[74003]: 2025-12-15 10:34:49.038185508 +0000 UTC m=+0.036300694 container create bfae2a40b0c571923e2c671e0c7436622e9b5446480d0b5138d5458f7005e26a (image=quay.io/ceph/ceph:v19, name=trusting_tharp, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 15 10:34:49 compute-0 ceph-mon[74002]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 15 10:34:49 compute-0 ceph-mon[74002]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 15 10:34:49 compute-0 ceph-mon[74002]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 15 10:34:49 compute-0 ceph-mon[74002]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: log_channel(cluster) log [DBG] : fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:34:49 compute-0 ceph-mon[74002]: log_channel(cluster) log [DBG] : last_changed 2025-12-15T10:34:45.470940+0000
Dec 15 10:34:49 compute-0 ceph-mon[74002]: log_channel(cluster) log [DBG] : created 2025-12-15T10:34:45.470940+0000
Dec 15 10:34:49 compute-0 ceph-mon[74002]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec 15 10:34:49 compute-0 ceph-mon[74002]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025,kernel_version=5.14.0-648.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864300,os=Linux}
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Dec 15 10:34:49 compute-0 systemd[1]: Started libpod-conmon-bfae2a40b0c571923e2c671e0c7436622e9b5446480d0b5138d5458f7005e26a.scope.
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).mds e1 new map
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2025-12-15T10:34:49.065673+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 15 10:34:49 compute-0 ceph-mon[74002]: log_channel(cluster) log [DBG] : fsmap 
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mkfs 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec 15 10:34:49 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/209e0a94410f064cf8de1e69dfdc9ef37a017c3d1a05c673b5be8c60ad0516ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/209e0a94410f064cf8de1e69dfdc9ef37a017c3d1a05c673b5be8c60ad0516ba/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/209e0a94410f064cf8de1e69dfdc9ef37a017c3d1a05c673b5be8c60ad0516ba/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:49 compute-0 podman[74003]: 2025-12-15 10:34:49.023436629 +0000 UTC m=+0.021551835 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:34:49 compute-0 ceph-mon[74002]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 15 10:34:49 compute-0 ceph-mon[74002]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 15 10:34:49 compute-0 podman[74003]: 2025-12-15 10:34:49.20292125 +0000 UTC m=+0.201036516 container init bfae2a40b0c571923e2c671e0c7436622e9b5446480d0b5138d5458f7005e26a (image=quay.io/ceph/ceph:v19, name=trusting_tharp, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 15 10:34:49 compute-0 podman[74003]: 2025-12-15 10:34:49.209987564 +0000 UTC m=+0.208102750 container start bfae2a40b0c571923e2c671e0c7436622e9b5446480d0b5138d5458f7005e26a (image=quay.io/ceph/ceph:v19, name=trusting_tharp, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:34:49 compute-0 podman[74003]: 2025-12-15 10:34:49.213552157 +0000 UTC m=+0.211667423 container attach bfae2a40b0c571923e2c671e0c7436622e9b5446480d0b5138d5458f7005e26a (image=quay.io/ceph/ceph:v19, name=trusting_tharp, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 15 10:34:49 compute-0 ceph-mon[74002]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/572939042' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 15 10:34:49 compute-0 trusting_tharp[74058]:   cluster:
Dec 15 10:34:49 compute-0 trusting_tharp[74058]:     id:     77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:34:49 compute-0 trusting_tharp[74058]:     health: HEALTH_OK
Dec 15 10:34:49 compute-0 trusting_tharp[74058]:  
Dec 15 10:34:49 compute-0 trusting_tharp[74058]:   services:
Dec 15 10:34:49 compute-0 trusting_tharp[74058]:     mon: 1 daemons, quorum compute-0 (age 0.363512s)
Dec 15 10:34:49 compute-0 trusting_tharp[74058]:     mgr: no daemons active
Dec 15 10:34:49 compute-0 trusting_tharp[74058]:     osd: 0 osds: 0 up, 0 in
Dec 15 10:34:49 compute-0 trusting_tharp[74058]:  
Dec 15 10:34:49 compute-0 trusting_tharp[74058]:   data:
Dec 15 10:34:49 compute-0 trusting_tharp[74058]:     pools:   0 pools, 0 pgs
Dec 15 10:34:49 compute-0 trusting_tharp[74058]:     objects: 0 objects, 0 B
Dec 15 10:34:49 compute-0 trusting_tharp[74058]:     usage:   0 B used, 0 B / 0 B avail
Dec 15 10:34:49 compute-0 trusting_tharp[74058]:     pgs:     
Dec 15 10:34:49 compute-0 trusting_tharp[74058]:  
Dec 15 10:34:49 compute-0 systemd[1]: libpod-bfae2a40b0c571923e2c671e0c7436622e9b5446480d0b5138d5458f7005e26a.scope: Deactivated successfully.
Dec 15 10:34:49 compute-0 podman[74003]: 2025-12-15 10:34:49.444552404 +0000 UTC m=+0.442667590 container died bfae2a40b0c571923e2c671e0c7436622e9b5446480d0b5138d5458f7005e26a (image=quay.io/ceph/ceph:v19, name=trusting_tharp, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 15 10:34:49 compute-0 podman[74003]: 2025-12-15 10:34:49.492737364 +0000 UTC m=+0.490852550 container remove bfae2a40b0c571923e2c671e0c7436622e9b5446480d0b5138d5458f7005e26a (image=quay.io/ceph/ceph:v19, name=trusting_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 15 10:34:49 compute-0 systemd[1]: libpod-conmon-bfae2a40b0c571923e2c671e0c7436622e9b5446480d0b5138d5458f7005e26a.scope: Deactivated successfully.
Dec 15 10:34:49 compute-0 podman[74097]: 2025-12-15 10:34:49.561484967 +0000 UTC m=+0.045973480 container create 4dcf725fa9f36a1d0447e10fd34223d0c21c37c801f60cc45d7e6bc6b3bcbbbc (image=quay.io/ceph/ceph:v19, name=wonderful_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 15 10:34:49 compute-0 systemd[1]: Started libpod-conmon-4dcf725fa9f36a1d0447e10fd34223d0c21c37c801f60cc45d7e6bc6b3bcbbbc.scope.
Dec 15 10:34:49 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/806de03068cd1568ef5d63f66736ebec3e17ab221d92d47224ffadc3f8a16811/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/806de03068cd1568ef5d63f66736ebec3e17ab221d92d47224ffadc3f8a16811/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/806de03068cd1568ef5d63f66736ebec3e17ab221d92d47224ffadc3f8a16811/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/806de03068cd1568ef5d63f66736ebec3e17ab221d92d47224ffadc3f8a16811/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:49 compute-0 podman[74097]: 2025-12-15 10:34:49.538498267 +0000 UTC m=+0.022986820 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:34:49 compute-0 podman[74097]: 2025-12-15 10:34:49.663094435 +0000 UTC m=+0.147582988 container init 4dcf725fa9f36a1d0447e10fd34223d0c21c37c801f60cc45d7e6bc6b3bcbbbc (image=quay.io/ceph/ceph:v19, name=wonderful_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:34:49 compute-0 podman[74097]: 2025-12-15 10:34:49.668890149 +0000 UTC m=+0.153378662 container start 4dcf725fa9f36a1d0447e10fd34223d0c21c37c801f60cc45d7e6bc6b3bcbbbc (image=quay.io/ceph/ceph:v19, name=wonderful_bell, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:34:49 compute-0 podman[74097]: 2025-12-15 10:34:49.671940596 +0000 UTC m=+0.156429139 container attach 4dcf725fa9f36a1d0447e10fd34223d0c21c37c801f60cc45d7e6bc6b3bcbbbc (image=quay.io/ceph/ceph:v19, name=wonderful_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 15 10:34:49 compute-0 ceph-mon[74002]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 15 10:34:49 compute-0 ceph-mon[74002]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/853548857' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 15 10:34:49 compute-0 ceph-mon[74002]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/853548857' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 15 10:34:49 compute-0 wonderful_bell[74113]: 
Dec 15 10:34:49 compute-0 wonderful_bell[74113]: [global]
Dec 15 10:34:49 compute-0 wonderful_bell[74113]:         fsid = 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:34:49 compute-0 wonderful_bell[74113]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 15 10:34:49 compute-0 systemd[1]: libpod-4dcf725fa9f36a1d0447e10fd34223d0c21c37c801f60cc45d7e6bc6b3bcbbbc.scope: Deactivated successfully.
Dec 15 10:34:49 compute-0 podman[74139]: 2025-12-15 10:34:49.925825919 +0000 UTC m=+0.036396987 container died 4dcf725fa9f36a1d0447e10fd34223d0c21c37c801f60cc45d7e6bc6b3bcbbbc (image=quay.io/ceph/ceph:v19, name=wonderful_bell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:34:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-806de03068cd1568ef5d63f66736ebec3e17ab221d92d47224ffadc3f8a16811-merged.mount: Deactivated successfully.
Dec 15 10:34:50 compute-0 podman[74139]: 2025-12-15 10:34:50.005487109 +0000 UTC m=+0.116058127 container remove 4dcf725fa9f36a1d0447e10fd34223d0c21c37c801f60cc45d7e6bc6b3bcbbbc (image=quay.io/ceph/ceph:v19, name=wonderful_bell, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 15 10:34:50 compute-0 systemd[1]: libpod-conmon-4dcf725fa9f36a1d0447e10fd34223d0c21c37c801f60cc45d7e6bc6b3bcbbbc.scope: Deactivated successfully.
Dec 15 10:34:50 compute-0 podman[74151]: 2025-12-15 10:34:50.073919602 +0000 UTC m=+0.044356640 container create 8662f4b5a6eb830b6b2d0fedea1a432f649b8144604f47b28b2f99d682a229ba (image=quay.io/ceph/ceph:v19, name=ecstatic_feistel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:34:50 compute-0 systemd[1]: Started libpod-conmon-8662f4b5a6eb830b6b2d0fedea1a432f649b8144604f47b28b2f99d682a229ba.scope.
Dec 15 10:34:50 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e7b8b93d8eb7b10cc3fe3783bbdd56589a7d4d22abbcb83880b9cbb39b111b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e7b8b93d8eb7b10cc3fe3783bbdd56589a7d4d22abbcb83880b9cbb39b111b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e7b8b93d8eb7b10cc3fe3783bbdd56589a7d4d22abbcb83880b9cbb39b111b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e7b8b93d8eb7b10cc3fe3783bbdd56589a7d4d22abbcb83880b9cbb39b111b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:50 compute-0 podman[74151]: 2025-12-15 10:34:50.145395192 +0000 UTC m=+0.115832260 container init 8662f4b5a6eb830b6b2d0fedea1a432f649b8144604f47b28b2f99d682a229ba (image=quay.io/ceph/ceph:v19, name=ecstatic_feistel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:34:50 compute-0 podman[74151]: 2025-12-15 10:34:50.053361158 +0000 UTC m=+0.023798226 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:34:50 compute-0 podman[74151]: 2025-12-15 10:34:50.151355391 +0000 UTC m=+0.121792429 container start 8662f4b5a6eb830b6b2d0fedea1a432f649b8144604f47b28b2f99d682a229ba (image=quay.io/ceph/ceph:v19, name=ecstatic_feistel, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:34:50 compute-0 podman[74151]: 2025-12-15 10:34:50.15540896 +0000 UTC m=+0.125846018 container attach 8662f4b5a6eb830b6b2d0fedea1a432f649b8144604f47b28b2f99d682a229ba (image=quay.io/ceph/ceph:v19, name=ecstatic_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 15 10:34:50 compute-0 ceph-mon[74002]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 15 10:34:50 compute-0 ceph-mon[74002]: monmap epoch 1
Dec 15 10:34:50 compute-0 ceph-mon[74002]: fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:34:50 compute-0 ceph-mon[74002]: last_changed 2025-12-15T10:34:45.470940+0000
Dec 15 10:34:50 compute-0 ceph-mon[74002]: created 2025-12-15T10:34:45.470940+0000
Dec 15 10:34:50 compute-0 ceph-mon[74002]: min_mon_release 19 (squid)
Dec 15 10:34:50 compute-0 ceph-mon[74002]: election_strategy: 1
Dec 15 10:34:50 compute-0 ceph-mon[74002]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 15 10:34:50 compute-0 ceph-mon[74002]: fsmap 
Dec 15 10:34:50 compute-0 ceph-mon[74002]: osdmap e1: 0 total, 0 up, 0 in
Dec 15 10:34:50 compute-0 ceph-mon[74002]: mgrmap e1: no daemons active
Dec 15 10:34:50 compute-0 ceph-mon[74002]: from='client.? 192.168.122.100:0/572939042' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 15 10:34:50 compute-0 ceph-mon[74002]: from='client.? 192.168.122.100:0/853548857' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 15 10:34:50 compute-0 ceph-mon[74002]: from='client.? 192.168.122.100:0/853548857' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 15 10:34:50 compute-0 ceph-mon[74002]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:34:50 compute-0 ceph-mon[74002]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1563133565' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:34:50 compute-0 systemd[1]: libpod-8662f4b5a6eb830b6b2d0fedea1a432f649b8144604f47b28b2f99d682a229ba.scope: Deactivated successfully.
Dec 15 10:34:50 compute-0 podman[74151]: 2025-12-15 10:34:50.347037336 +0000 UTC m=+0.317474384 container died 8662f4b5a6eb830b6b2d0fedea1a432f649b8144604f47b28b2f99d682a229ba (image=quay.io/ceph/ceph:v19, name=ecstatic_feistel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 15 10:34:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-86e7b8b93d8eb7b10cc3fe3783bbdd56589a7d4d22abbcb83880b9cbb39b111b-merged.mount: Deactivated successfully.
Dec 15 10:34:50 compute-0 podman[74151]: 2025-12-15 10:34:50.394276546 +0000 UTC m=+0.364713594 container remove 8662f4b5a6eb830b6b2d0fedea1a432f649b8144604f47b28b2f99d682a229ba (image=quay.io/ceph/ceph:v19, name=ecstatic_feistel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 15 10:34:50 compute-0 systemd[1]: libpod-conmon-8662f4b5a6eb830b6b2d0fedea1a432f649b8144604f47b28b2f99d682a229ba.scope: Deactivated successfully.
Dec 15 10:34:50 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 77365f67-614e-5a8d-b658-640395550c79...
Dec 15 10:34:50 compute-0 ceph-mon[74002]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 15 10:34:50 compute-0 ceph-mon[74002]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 15 10:34:50 compute-0 ceph-mon[74002]: mon.compute-0@0(leader) e1 shutdown
Dec 15 10:34:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0[73998]: 2025-12-15T10:34:50.575+0000 7f89e6da2640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 15 10:34:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0[73998]: 2025-12-15T10:34:50.575+0000 7f89e6da2640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 15 10:34:50 compute-0 ceph-mon[74002]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 15 10:34:50 compute-0 ceph-mon[74002]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 15 10:34:50 compute-0 podman[74234]: 2025-12-15 10:34:50.70014214 +0000 UTC m=+0.161323404 container died 97d0713e4f1e0bfa7725536e0f6d871684976d8b62cc8ca81e2420deb933c80c (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:34:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d1b6c27223915ba7c7076ccc131940a2ae5972a8acb6e72af919666ff5f8f65-merged.mount: Deactivated successfully.
Dec 15 10:34:50 compute-0 podman[74234]: 2025-12-15 10:34:50.731856447 +0000 UTC m=+0.193037711 container remove 97d0713e4f1e0bfa7725536e0f6d871684976d8b62cc8ca81e2420deb933c80c (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:34:50 compute-0 bash[74234]: ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0
Dec 15 10:34:50 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 15 10:34:50 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 15 10:34:50 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 15 10:34:50 compute-0 systemd[1]: ceph-77365f67-614e-5a8d-b658-640395550c79@mon.compute-0.service: Deactivated successfully.
Dec 15 10:34:50 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:34:50 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 77365f67-614e-5a8d-b658-640395550c79...
Dec 15 10:34:51 compute-0 podman[74336]: 2025-12-15 10:34:51.070274255 +0000 UTC m=+0.039660731 container create 79ed8dc51b1852fd831764c2817cfa3745ae4937bc9015bdb965e376ab3cc58d (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:34:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bad1be1c182af68c4864170e0919e59229b2adf15f7d39f465c857275c11b36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bad1be1c182af68c4864170e0919e59229b2adf15f7d39f465c857275c11b36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bad1be1c182af68c4864170e0919e59229b2adf15f7d39f465c857275c11b36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bad1be1c182af68c4864170e0919e59229b2adf15f7d39f465c857275c11b36/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:51 compute-0 podman[74336]: 2025-12-15 10:34:51.123930629 +0000 UTC m=+0.093317115 container init 79ed8dc51b1852fd831764c2817cfa3745ae4937bc9015bdb965e376ab3cc58d (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 15 10:34:51 compute-0 podman[74336]: 2025-12-15 10:34:51.132694738 +0000 UTC m=+0.102081214 container start 79ed8dc51b1852fd831764c2817cfa3745ae4937bc9015bdb965e376ab3cc58d (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:34:51 compute-0 bash[74336]: 79ed8dc51b1852fd831764c2817cfa3745ae4937bc9015bdb965e376ab3cc58d
Dec 15 10:34:51 compute-0 podman[74336]: 2025-12-15 10:34:51.052485881 +0000 UTC m=+0.021872387 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:34:51 compute-0 systemd[1]: Started Ceph mon.compute-0 for 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:34:51 compute-0 ceph-mon[74356]: set uid:gid to 167:167 (ceph:ceph)
Dec 15 10:34:51 compute-0 ceph-mon[74356]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Dec 15 10:34:51 compute-0 ceph-mon[74356]: pidfile_write: ignore empty --pid-file
Dec 15 10:34:51 compute-0 ceph-mon[74356]: load: jerasure load: lrc 
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: RocksDB version: 7.9.2
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: Git sha 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: DB SUMMARY
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: DB Session ID:  8WPMBXYVT9DSSQWRN3T3
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: CURRENT file:  CURRENT
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: IDENTITY file:  IDENTITY
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 58729 ; 
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                         Options.error_if_exists: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                       Options.create_if_missing: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                         Options.paranoid_checks: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                                     Options.env: 0x559a6ff84c20
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                                Options.info_log: 0x559a713f1ac0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                Options.max_file_opening_threads: 16
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                              Options.statistics: (nil)
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                               Options.use_fsync: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                       Options.max_log_file_size: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                         Options.allow_fallocate: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                        Options.use_direct_reads: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:          Options.create_missing_column_families: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                              Options.db_log_dir: 
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                                 Options.wal_dir: 
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                   Options.advise_random_on_open: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                    Options.write_buffer_manager: 0x559a713f5900
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                            Options.rate_limiter: (nil)
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                  Options.unordered_write: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                               Options.row_cache: None
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                              Options.wal_filter: None
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:             Options.allow_ingest_behind: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:             Options.two_write_queues: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:             Options.manual_wal_flush: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:             Options.wal_compression: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:             Options.atomic_flush: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                 Options.log_readahead_size: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:             Options.allow_data_in_errors: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:             Options.db_host_id: __hostname__
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:             Options.max_background_jobs: 2
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:             Options.max_background_compactions: -1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:             Options.max_subcompactions: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:             Options.max_total_wal_size: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                          Options.max_open_files: -1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                          Options.bytes_per_sync: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:       Options.compaction_readahead_size: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                  Options.max_background_flushes: -1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: Compression algorithms supported:
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:         kZSTD supported: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:         kXpressCompression supported: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:         kBZip2Compression supported: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:         kLZ4Compression supported: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:         kZlibCompression supported: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:         kLZ4HCCompression supported: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:         kSnappyCompression supported: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:           Options.merge_operator: 
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:        Options.compaction_filter: None
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:        Options.compaction_filter_factory: None
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:  Options.sst_partitioner_factory: None
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a713f0aa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559a71415350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:        Options.write_buffer_size: 33554432
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:  Options.max_write_buffer_number: 2
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:          Options.compression: NoCompression
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:       Options.prefix_extractor: nullptr
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:             Options.num_levels: 7
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                  Options.compression_opts.level: 32767
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:               Options.compression_opts.strategy: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                  Options.compression_opts.enabled: false
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                        Options.arena_block_size: 1048576
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                Options.disable_auto_compactions: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                   Options.inplace_update_support: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                           Options.bloom_locality: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                    Options.max_successive_merges: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                Options.paranoid_file_checks: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                Options.force_consistency_checks: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                Options.report_bg_io_stats: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                               Options.ttl: 2592000
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                       Options.enable_blob_files: false
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                           Options.min_blob_size: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                          Options.blob_file_size: 268435456
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb:                Options.blob_file_starting_level: 0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d36e3d93-cef6-4482-9c71-0054ae87e0c9
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765794891181358, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765794891186142, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 58480, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 56954, "index_size": 168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3182, "raw_average_key_size": 30, "raw_value_size": 54471, "raw_average_value_size": 523, "num_data_blocks": 9, "num_entries": 104, "num_filter_entries": 104, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765794891, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d36e3d93-cef6-4482-9c71-0054ae87e0c9", "db_session_id": "8WPMBXYVT9DSSQWRN3T3", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765794891186319, "job": 1, "event": "recovery_finished"}
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559a71416e00
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: DB pointer 0x559a71520000
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 15 10:34:51 compute-0 ceph-mon[74356]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   59.01 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0   59.01 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 2.90 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 2.90 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a71415350#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 15 10:34:51 compute-0 ceph-mon[74356]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:34:51 compute-0 ceph-mon[74356]: mon.compute-0@-1(???) e1 preinit fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:34:51 compute-0 ceph-mon[74356]: mon.compute-0@-1(???).mds e1 new map
Dec 15 10:34:51 compute-0 ceph-mon[74356]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2025-12-15T10:34:49:065673+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Dec 15 10:34:51 compute-0 ceph-mon[74356]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 15 10:34:51 compute-0 ceph-mon[74356]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 15 10:34:51 compute-0 ceph-mon[74356]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 15 10:34:51 compute-0 ceph-mon[74356]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 15 10:34:51 compute-0 ceph-mon[74356]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec 15 10:34:51 compute-0 ceph-mon[74356]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec 15 10:34:51 compute-0 ceph-mon[74356]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 15 10:34:51 compute-0 ceph-mon[74356]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec 15 10:34:51 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 15 10:34:51 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 15 10:34:51 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:34:51 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : last_changed 2025-12-15T10:34:45.470940+0000
Dec 15 10:34:51 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : created 2025-12-15T10:34:45.470940+0000
Dec 15 10:34:51 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec 15 10:34:51 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 15 10:34:51 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : fsmap 
Dec 15 10:34:51 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 15 10:34:51 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 15 10:34:51 compute-0 podman[74357]: 2025-12-15 10:34:51.215609781 +0000 UTC m=+0.049376719 container create 1efc1af1c30929579ace02a9981de528230c53465c4828840ad8229c8f14f518 (image=quay.io/ceph/ceph:v19, name=nifty_poincare, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:34:51 compute-0 systemd[1]: Started libpod-conmon-1efc1af1c30929579ace02a9981de528230c53465c4828840ad8229c8f14f518.scope.
Dec 15 10:34:51 compute-0 ceph-mon[74356]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 15 10:34:51 compute-0 ceph-mon[74356]: monmap epoch 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:34:51 compute-0 ceph-mon[74356]: last_changed 2025-12-15T10:34:45.470940+0000
Dec 15 10:34:51 compute-0 ceph-mon[74356]: created 2025-12-15T10:34:45.470940+0000
Dec 15 10:34:51 compute-0 ceph-mon[74356]: min_mon_release 19 (squid)
Dec 15 10:34:51 compute-0 ceph-mon[74356]: election_strategy: 1
Dec 15 10:34:51 compute-0 ceph-mon[74356]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 15 10:34:51 compute-0 ceph-mon[74356]: fsmap 
Dec 15 10:34:51 compute-0 ceph-mon[74356]: osdmap e1: 0 total, 0 up, 0 in
Dec 15 10:34:51 compute-0 ceph-mon[74356]: mgrmap e1: no daemons active
Dec 15 10:34:51 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:34:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6504b7817df2f1b2e6b1f73ba216c0fc7cd7fe2d8634596afa7a66a10ec98cb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6504b7817df2f1b2e6b1f73ba216c0fc7cd7fe2d8634596afa7a66a10ec98cb9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6504b7817df2f1b2e6b1f73ba216c0fc7cd7fe2d8634596afa7a66a10ec98cb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:51 compute-0 podman[74357]: 2025-12-15 10:34:51.193682584 +0000 UTC m=+0.027449612 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:34:51 compute-0 podman[74357]: 2025-12-15 10:34:51.293063811 +0000 UTC m=+0.126830779 container init 1efc1af1c30929579ace02a9981de528230c53465c4828840ad8229c8f14f518 (image=quay.io/ceph/ceph:v19, name=nifty_poincare, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:34:51 compute-0 podman[74357]: 2025-12-15 10:34:51.298501164 +0000 UTC m=+0.132268112 container start 1efc1af1c30929579ace02a9981de528230c53465c4828840ad8229c8f14f518 (image=quay.io/ceph/ceph:v19, name=nifty_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:34:51 compute-0 podman[74357]: 2025-12-15 10:34:51.301374855 +0000 UTC m=+0.135141803 container attach 1efc1af1c30929579ace02a9981de528230c53465c4828840ad8229c8f14f518 (image=quay.io/ceph/ceph:v19, name=nifty_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:34:51 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Dec 15 10:34:51 compute-0 systemd[1]: libpod-1efc1af1c30929579ace02a9981de528230c53465c4828840ad8229c8f14f518.scope: Deactivated successfully.
Dec 15 10:34:51 compute-0 podman[74357]: 2025-12-15 10:34:51.501661536 +0000 UTC m=+0.335428484 container died 1efc1af1c30929579ace02a9981de528230c53465c4828840ad8229c8f14f518 (image=quay.io/ceph/ceph:v19, name=nifty_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Dec 15 10:34:51 compute-0 podman[74357]: 2025-12-15 10:34:51.534360374 +0000 UTC m=+0.368127322 container remove 1efc1af1c30929579ace02a9981de528230c53465c4828840ad8229c8f14f518 (image=quay.io/ceph/ceph:v19, name=nifty_poincare, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:34:51 compute-0 systemd[1]: libpod-conmon-1efc1af1c30929579ace02a9981de528230c53465c4828840ad8229c8f14f518.scope: Deactivated successfully.
Dec 15 10:34:51 compute-0 podman[74449]: 2025-12-15 10:34:51.593478141 +0000 UTC m=+0.037646376 container create ee30abaaac6c37b09150bbe746e2eb65d52942c1efe51223751fd8ccb3f52013 (image=quay.io/ceph/ceph:v19, name=hardcore_kilby, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:34:51 compute-0 systemd[1]: Started libpod-conmon-ee30abaaac6c37b09150bbe746e2eb65d52942c1efe51223751fd8ccb3f52013.scope.
Dec 15 10:34:51 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:34:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf481d4d25d1f19c2d190f1dce4c98fdaf9911fa7d6819d9071c49945ba1b0ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf481d4d25d1f19c2d190f1dce4c98fdaf9911fa7d6819d9071c49945ba1b0ad/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf481d4d25d1f19c2d190f1dce4c98fdaf9911fa7d6819d9071c49945ba1b0ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:51 compute-0 podman[74449]: 2025-12-15 10:34:51.659056785 +0000 UTC m=+0.103225060 container init ee30abaaac6c37b09150bbe746e2eb65d52942c1efe51223751fd8ccb3f52013 (image=quay.io/ceph/ceph:v19, name=hardcore_kilby, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 15 10:34:51 compute-0 podman[74449]: 2025-12-15 10:34:51.666893653 +0000 UTC m=+0.111061898 container start ee30abaaac6c37b09150bbe746e2eb65d52942c1efe51223751fd8ccb3f52013 (image=quay.io/ceph/ceph:v19, name=hardcore_kilby, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 15 10:34:51 compute-0 podman[74449]: 2025-12-15 10:34:51.676083665 +0000 UTC m=+0.120251910 container attach ee30abaaac6c37b09150bbe746e2eb65d52942c1efe51223751fd8ccb3f52013 (image=quay.io/ceph/ceph:v19, name=hardcore_kilby, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 15 10:34:51 compute-0 podman[74449]: 2025-12-15 10:34:51.57926294 +0000 UTC m=+0.023431205 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:34:51 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Dec 15 10:34:51 compute-0 systemd[1]: libpod-ee30abaaac6c37b09150bbe746e2eb65d52942c1efe51223751fd8ccb3f52013.scope: Deactivated successfully.
Dec 15 10:34:51 compute-0 podman[74449]: 2025-12-15 10:34:51.865702077 +0000 UTC m=+0.309870322 container died ee30abaaac6c37b09150bbe746e2eb65d52942c1efe51223751fd8ccb3f52013 (image=quay.io/ceph/ceph:v19, name=hardcore_kilby, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 15 10:34:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf481d4d25d1f19c2d190f1dce4c98fdaf9911fa7d6819d9071c49945ba1b0ad-merged.mount: Deactivated successfully.
Dec 15 10:34:51 compute-0 podman[74449]: 2025-12-15 10:34:51.923672088 +0000 UTC m=+0.367840333 container remove ee30abaaac6c37b09150bbe746e2eb65d52942c1efe51223751fd8ccb3f52013 (image=quay.io/ceph/ceph:v19, name=hardcore_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 15 10:34:51 compute-0 systemd[1]: libpod-conmon-ee30abaaac6c37b09150bbe746e2eb65d52942c1efe51223751fd8ccb3f52013.scope: Deactivated successfully.
Dec 15 10:34:52 compute-0 systemd[1]: Reloading.
Dec 15 10:34:52 compute-0 systemd-sysv-generator[74533]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:34:52 compute-0 systemd-rc-local-generator[74529]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:34:52 compute-0 systemd[1]: Reloading.
Dec 15 10:34:52 compute-0 systemd-rc-local-generator[74570]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:34:52 compute-0 systemd-sysv-generator[74573]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:34:52 compute-0 systemd[1]: Starting Ceph mgr.compute-0.difmqj for 77365f67-614e-5a8d-b658-640395550c79...
Dec 15 10:34:52 compute-0 podman[74631]: 2025-12-15 10:34:52.896324949 +0000 UTC m=+0.059820531 container create fd677f2590c3ecaa742bb311fe494712e563f26d09ecad50c459f279a65b4af6 (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 15 10:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b1472bcefdbfd0822a6781567d810609c275c4437a3a7037caf7ec86bc1069/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b1472bcefdbfd0822a6781567d810609c275c4437a3a7037caf7ec86bc1069/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b1472bcefdbfd0822a6781567d810609c275c4437a3a7037caf7ec86bc1069/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b1472bcefdbfd0822a6781567d810609c275c4437a3a7037caf7ec86bc1069/merged/var/lib/ceph/mgr/ceph-compute-0.difmqj supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:52 compute-0 podman[74631]: 2025-12-15 10:34:52.861614396 +0000 UTC m=+0.025110008 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:34:52 compute-0 podman[74631]: 2025-12-15 10:34:52.957159491 +0000 UTC m=+0.120655143 container init fd677f2590c3ecaa742bb311fe494712e563f26d09ecad50c459f279a65b4af6 (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 15 10:34:52 compute-0 podman[74631]: 2025-12-15 10:34:52.964212885 +0000 UTC m=+0.127708467 container start fd677f2590c3ecaa742bb311fe494712e563f26d09ecad50c459f279a65b4af6 (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:34:52 compute-0 bash[74631]: fd677f2590c3ecaa742bb311fe494712e563f26d09ecad50c459f279a65b4af6
Dec 15 10:34:52 compute-0 systemd[1]: Started Ceph mgr.compute-0.difmqj for 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:34:53 compute-0 ceph-mgr[74651]: set uid:gid to 167:167 (ceph:ceph)
Dec 15 10:34:53 compute-0 ceph-mgr[74651]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 15 10:34:53 compute-0 ceph-mgr[74651]: pidfile_write: ignore empty --pid-file
Dec 15 10:34:53 compute-0 podman[74652]: 2025-12-15 10:34:53.03993179 +0000 UTC m=+0.041036945 container create abc28af44e9279262f52c77624eecd721caff838214baea2a97d1012c0222aaa (image=quay.io/ceph/ceph:v19, name=blissful_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:34:53 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'alerts'
Dec 15 10:34:53 compute-0 systemd[1]: Started libpod-conmon-abc28af44e9279262f52c77624eecd721caff838214baea2a97d1012c0222aaa.scope.
Dec 15 10:34:53 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de417bbf9b286496f3666040643776d04c8fcf75cd57d90d5bdc85a267a77145/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de417bbf9b286496f3666040643776d04c8fcf75cd57d90d5bdc85a267a77145/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de417bbf9b286496f3666040643776d04c8fcf75cd57d90d5bdc85a267a77145/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:53 compute-0 podman[74652]: 2025-12-15 10:34:53.022262749 +0000 UTC m=+0.023367924 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:34:53 compute-0 podman[74652]: 2025-12-15 10:34:53.130709093 +0000 UTC m=+0.131814268 container init abc28af44e9279262f52c77624eecd721caff838214baea2a97d1012c0222aaa (image=quay.io/ceph/ceph:v19, name=blissful_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 15 10:34:53 compute-0 podman[74652]: 2025-12-15 10:34:53.139476061 +0000 UTC m=+0.140581216 container start abc28af44e9279262f52c77624eecd721caff838214baea2a97d1012c0222aaa (image=quay.io/ceph/ceph:v19, name=blissful_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 15 10:34:53 compute-0 ceph-mgr[74651]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 15 10:34:53 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'balancer'
Dec 15 10:34:53 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:53.150+0000 7fd43c760140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 15 10:34:53 compute-0 podman[74652]: 2025-12-15 10:34:53.142682473 +0000 UTC m=+0.143787628 container attach abc28af44e9279262f52c77624eecd721caff838214baea2a97d1012c0222aaa (image=quay.io/ceph/ceph:v19, name=blissful_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Dec 15 10:34:53 compute-0 ceph-mgr[74651]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 15 10:34:53 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'cephadm'
Dec 15 10:34:53 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:53.230+0000 7fd43c760140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 15 10:34:53 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 15 10:34:53 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3472830242' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 15 10:34:53 compute-0 blissful_thompson[74689]: 
Dec 15 10:34:53 compute-0 blissful_thompson[74689]: {
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:     "fsid": "77365f67-614e-5a8d-b658-640395550c79",
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:     "health": {
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "status": "HEALTH_OK",
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "checks": {},
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "mutes": []
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:     },
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:     "election_epoch": 5,
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:     "quorum": [
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         0
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:     ],
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:     "quorum_names": [
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "compute-0"
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:     ],
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:     "quorum_age": 2,
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:     "monmap": {
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "epoch": 1,
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "min_mon_release_name": "squid",
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "num_mons": 1
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:     },
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:     "osdmap": {
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "epoch": 1,
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "num_osds": 0,
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "num_up_osds": 0,
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "osd_up_since": 0,
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "num_in_osds": 0,
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "osd_in_since": 0,
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "num_remapped_pgs": 0
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:     },
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:     "pgmap": {
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "pgs_by_state": [],
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "num_pgs": 0,
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "num_pools": 0,
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "num_objects": 0,
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "data_bytes": 0,
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "bytes_used": 0,
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "bytes_avail": 0,
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "bytes_total": 0
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:     },
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:     "fsmap": {
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "epoch": 1,
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "btime": "2025-12-15T10:34:49:065673+0000",
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "by_rank": [],
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "up:standby": 0
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:     },
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:     "mgrmap": {
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "available": false,
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "num_standbys": 0,
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "modules": [
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:             "iostat",
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:             "nfs",
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:             "restful"
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         ],
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "services": {}
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:     },
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:     "servicemap": {
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "epoch": 1,
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "modified": "2025-12-15T10:34:49.067589+0000",
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:         "services": {}
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:     },
Dec 15 10:34:53 compute-0 blissful_thompson[74689]:     "progress_events": {}
Dec 15 10:34:53 compute-0 blissful_thompson[74689]: }
Dec 15 10:34:53 compute-0 systemd[1]: libpod-abc28af44e9279262f52c77624eecd721caff838214baea2a97d1012c0222aaa.scope: Deactivated successfully.
Dec 15 10:34:53 compute-0 podman[74715]: 2025-12-15 10:34:53.383608845 +0000 UTC m=+0.024198500 container died abc28af44e9279262f52c77624eecd721caff838214baea2a97d1012c0222aaa (image=quay.io/ceph/ceph:v19, name=blissful_thompson, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:34:53 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/3472830242' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 15 10:34:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-de417bbf9b286496f3666040643776d04c8fcf75cd57d90d5bdc85a267a77145-merged.mount: Deactivated successfully.
Dec 15 10:34:53 compute-0 podman[74715]: 2025-12-15 10:34:53.423528273 +0000 UTC m=+0.064117918 container remove abc28af44e9279262f52c77624eecd721caff838214baea2a97d1012c0222aaa (image=quay.io/ceph/ceph:v19, name=blissful_thompson, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:34:53 compute-0 systemd[1]: libpod-conmon-abc28af44e9279262f52c77624eecd721caff838214baea2a97d1012c0222aaa.scope: Deactivated successfully.
Dec 15 10:34:54 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'crash'
Dec 15 10:34:54 compute-0 ceph-mgr[74651]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 15 10:34:54 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'dashboard'
Dec 15 10:34:54 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:54.094+0000 7fd43c760140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 15 10:34:54 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'devicehealth'
Dec 15 10:34:54 compute-0 ceph-mgr[74651]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 15 10:34:54 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'diskprediction_local'
Dec 15 10:34:54 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:54.746+0000 7fd43c760140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 15 10:34:54 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 15 10:34:54 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 15 10:34:54 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]:   from numpy import show_config as show_numpy_config
Dec 15 10:34:54 compute-0 ceph-mgr[74651]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 15 10:34:54 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'influx'
Dec 15 10:34:54 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:54.907+0000 7fd43c760140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 15 10:34:54 compute-0 ceph-mgr[74651]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 15 10:34:54 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'insights'
Dec 15 10:34:54 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:54.984+0000 7fd43c760140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 15 10:34:55 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'iostat'
Dec 15 10:34:55 compute-0 ceph-mgr[74651]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 15 10:34:55 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'k8sevents'
Dec 15 10:34:55 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:55.130+0000 7fd43c760140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 15 10:34:55 compute-0 podman[74741]: 2025-12-15 10:34:55.506547458 +0000 UTC m=+0.047551111 container create 8060d8a7b2d3bb11378cc20b62efaadd2ed82fe65dacdb5605b17caf765f5a45 (image=quay.io/ceph/ceph:v19, name=affectionate_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 15 10:34:55 compute-0 systemd[1]: Started libpod-conmon-8060d8a7b2d3bb11378cc20b62efaadd2ed82fe65dacdb5605b17caf765f5a45.scope.
Dec 15 10:34:55 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/375828d1e8fbaeff6f53a49bc11bcbbbd49dd3fcb08a8d4d72157c0dbc63c2e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/375828d1e8fbaeff6f53a49bc11bcbbbd49dd3fcb08a8d4d72157c0dbc63c2e8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/375828d1e8fbaeff6f53a49bc11bcbbbd49dd3fcb08a8d4d72157c0dbc63c2e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:55 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'localpool'
Dec 15 10:34:55 compute-0 podman[74741]: 2025-12-15 10:34:55.580955801 +0000 UTC m=+0.121959494 container init 8060d8a7b2d3bb11378cc20b62efaadd2ed82fe65dacdb5605b17caf765f5a45 (image=quay.io/ceph/ceph:v19, name=affectionate_rhodes, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 15 10:34:55 compute-0 podman[74741]: 2025-12-15 10:34:55.489581509 +0000 UTC m=+0.030585192 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:34:55 compute-0 podman[74741]: 2025-12-15 10:34:55.585681991 +0000 UTC m=+0.126685654 container start 8060d8a7b2d3bb11378cc20b62efaadd2ed82fe65dacdb5605b17caf765f5a45 (image=quay.io/ceph/ceph:v19, name=affectionate_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Dec 15 10:34:55 compute-0 podman[74741]: 2025-12-15 10:34:55.58848658 +0000 UTC m=+0.129490243 container attach 8060d8a7b2d3bb11378cc20b62efaadd2ed82fe65dacdb5605b17caf765f5a45 (image=quay.io/ceph/ceph:v19, name=affectionate_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:34:55 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'mds_autoscaler'
Dec 15 10:34:55 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 15 10:34:55 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2778371178' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]: 
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]: {
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:     "fsid": "77365f67-614e-5a8d-b658-640395550c79",
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:     "health": {
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "status": "HEALTH_OK",
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "checks": {},
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "mutes": []
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:     },
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:     "election_epoch": 5,
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:     "quorum": [
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         0
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:     ],
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:     "quorum_names": [
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "compute-0"
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:     ],
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:     "quorum_age": 4,
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:     "monmap": {
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "epoch": 1,
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "min_mon_release_name": "squid",
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "num_mons": 1
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:     },
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:     "osdmap": {
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "epoch": 1,
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "num_osds": 0,
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "num_up_osds": 0,
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "osd_up_since": 0,
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "num_in_osds": 0,
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "osd_in_since": 0,
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "num_remapped_pgs": 0
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:     },
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:     "pgmap": {
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "pgs_by_state": [],
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "num_pgs": 0,
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "num_pools": 0,
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "num_objects": 0,
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "data_bytes": 0,
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "bytes_used": 0,
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "bytes_avail": 0,
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "bytes_total": 0
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:     },
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:     "fsmap": {
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "epoch": 1,
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "btime": "2025-12-15T10:34:49:065673+0000",
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "by_rank": [],
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "up:standby": 0
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:     },
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:     "mgrmap": {
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "available": false,
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "num_standbys": 0,
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "modules": [
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:             "iostat",
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:             "nfs",
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:             "restful"
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         ],
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "services": {}
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:     },
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:     "servicemap": {
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "epoch": 1,
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "modified": "2025-12-15T10:34:49.067589+0000",
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:         "services": {}
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:     },
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]:     "progress_events": {}
Dec 15 10:34:55 compute-0 affectionate_rhodes[74757]: }
Dec 15 10:34:55 compute-0 systemd[1]: libpod-8060d8a7b2d3bb11378cc20b62efaadd2ed82fe65dacdb5605b17caf765f5a45.scope: Deactivated successfully.
Dec 15 10:34:55 compute-0 podman[74741]: 2025-12-15 10:34:55.828561565 +0000 UTC m=+0.369565228 container died 8060d8a7b2d3bb11378cc20b62efaadd2ed82fe65dacdb5605b17caf765f5a45 (image=quay.io/ceph/ceph:v19, name=affectionate_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 15 10:34:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-375828d1e8fbaeff6f53a49bc11bcbbbd49dd3fcb08a8d4d72157c0dbc63c2e8-merged.mount: Deactivated successfully.
Dec 15 10:34:55 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/2778371178' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 15 10:34:55 compute-0 podman[74741]: 2025-12-15 10:34:55.869793674 +0000 UTC m=+0.410797337 container remove 8060d8a7b2d3bb11378cc20b62efaadd2ed82fe65dacdb5605b17caf765f5a45 (image=quay.io/ceph/ceph:v19, name=affectionate_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 15 10:34:55 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'mirroring'
Dec 15 10:34:55 compute-0 systemd[1]: libpod-conmon-8060d8a7b2d3bb11378cc20b62efaadd2ed82fe65dacdb5605b17caf765f5a45.scope: Deactivated successfully.
Dec 15 10:34:55 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'nfs'
Dec 15 10:34:56 compute-0 ceph-mgr[74651]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 15 10:34:56 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'orchestrator'
Dec 15 10:34:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:56.218+0000 7fd43c760140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 15 10:34:56 compute-0 ceph-mgr[74651]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 15 10:34:56 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'osd_perf_query'
Dec 15 10:34:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:56.447+0000 7fd43c760140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 15 10:34:56 compute-0 ceph-mgr[74651]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 15 10:34:56 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'osd_support'
Dec 15 10:34:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:56.525+0000 7fd43c760140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 15 10:34:56 compute-0 ceph-mgr[74651]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 15 10:34:56 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'pg_autoscaler'
Dec 15 10:34:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:56.594+0000 7fd43c760140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 15 10:34:56 compute-0 ceph-mgr[74651]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 15 10:34:56 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'progress'
Dec 15 10:34:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:56.676+0000 7fd43c760140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 15 10:34:56 compute-0 ceph-mgr[74651]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 15 10:34:56 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'prometheus'
Dec 15 10:34:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:56.748+0000 7fd43c760140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 15 10:34:57 compute-0 ceph-mgr[74651]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 15 10:34:57 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:57.115+0000 7fd43c760140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 15 10:34:57 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'rbd_support'
Dec 15 10:34:57 compute-0 ceph-mgr[74651]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 15 10:34:57 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'restful'
Dec 15 10:34:57 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:57.224+0000 7fd43c760140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 15 10:34:57 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'rgw'
Dec 15 10:34:57 compute-0 ceph-mgr[74651]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 15 10:34:57 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:57.691+0000 7fd43c760140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 15 10:34:57 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'rook'
Dec 15 10:34:57 compute-0 podman[74795]: 2025-12-15 10:34:57.992963484 +0000 UTC m=+0.095249416 container create 8bbb4562bbb69b8aef2684f3b3403850c2a755ee12e440b0f065376c2d663188 (image=quay.io/ceph/ceph:v19, name=elegant_thompson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:34:58 compute-0 podman[74795]: 2025-12-15 10:34:57.921551066 +0000 UTC m=+0.023837028 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:34:58 compute-0 systemd[1]: Started libpod-conmon-8bbb4562bbb69b8aef2684f3b3403850c2a755ee12e440b0f065376c2d663188.scope.
Dec 15 10:34:58 compute-0 ceph-mgr[74651]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 15 10:34:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:58.326+0000 7fd43c760140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 15 10:34:58 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'selftest'
Dec 15 10:34:58 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:34:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9395cfd03338d5ee9ce30d28e1396b29e9cf4f260dc4951e07ebcc5ada46fbe2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9395cfd03338d5ee9ce30d28e1396b29e9cf4f260dc4951e07ebcc5ada46fbe2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9395cfd03338d5ee9ce30d28e1396b29e9cf4f260dc4951e07ebcc5ada46fbe2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:34:58 compute-0 podman[74795]: 2025-12-15 10:34:58.37100848 +0000 UTC m=+0.473294472 container init 8bbb4562bbb69b8aef2684f3b3403850c2a755ee12e440b0f065376c2d663188 (image=quay.io/ceph/ceph:v19, name=elegant_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 15 10:34:58 compute-0 podman[74795]: 2025-12-15 10:34:58.375899046 +0000 UTC m=+0.478184978 container start 8bbb4562bbb69b8aef2684f3b3403850c2a755ee12e440b0f065376c2d663188 (image=quay.io/ceph/ceph:v19, name=elegant_thompson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True)
Dec 15 10:34:58 compute-0 podman[74795]: 2025-12-15 10:34:58.385128109 +0000 UTC m=+0.487414061 container attach 8bbb4562bbb69b8aef2684f3b3403850c2a755ee12e440b0f065376c2d663188 (image=quay.io/ceph/ceph:v19, name=elegant_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:34:58 compute-0 ceph-mgr[74651]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 15 10:34:58 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'snap_schedule'
Dec 15 10:34:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:58.409+0000 7fd43c760140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 15 10:34:58 compute-0 ceph-mgr[74651]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 15 10:34:58 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'stats'
Dec 15 10:34:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:58.494+0000 7fd43c760140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 15 10:34:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 15 10:34:58 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/709611193' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 15 10:34:58 compute-0 elegant_thompson[74811]: 
Dec 15 10:34:58 compute-0 elegant_thompson[74811]: {
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:     "fsid": "77365f67-614e-5a8d-b658-640395550c79",
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:     "health": {
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "status": "HEALTH_OK",
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "checks": {},
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "mutes": []
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:     },
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:     "election_epoch": 5,
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:     "quorum": [
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         0
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:     ],
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:     "quorum_names": [
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "compute-0"
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:     ],
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:     "quorum_age": 7,
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:     "monmap": {
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "epoch": 1,
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "min_mon_release_name": "squid",
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "num_mons": 1
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:     },
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:     "osdmap": {
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "epoch": 1,
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "num_osds": 0,
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "num_up_osds": 0,
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "osd_up_since": 0,
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "num_in_osds": 0,
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "osd_in_since": 0,
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "num_remapped_pgs": 0
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:     },
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:     "pgmap": {
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "pgs_by_state": [],
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "num_pgs": 0,
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "num_pools": 0,
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "num_objects": 0,
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "data_bytes": 0,
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "bytes_used": 0,
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "bytes_avail": 0,
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "bytes_total": 0
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:     },
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:     "fsmap": {
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "epoch": 1,
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "btime": "2025-12-15T10:34:49:065673+0000",
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "by_rank": [],
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "up:standby": 0
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:     },
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:     "mgrmap": {
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "available": false,
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "num_standbys": 0,
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "modules": [
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:             "iostat",
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:             "nfs",
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:             "restful"
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         ],
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "services": {}
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:     },
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:     "servicemap": {
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "epoch": 1,
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "modified": "2025-12-15T10:34:49.067589+0000",
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:         "services": {}
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:     },
Dec 15 10:34:58 compute-0 elegant_thompson[74811]:     "progress_events": {}
Dec 15 10:34:58 compute-0 elegant_thompson[74811]: }
Dec 15 10:34:58 compute-0 systemd[1]: libpod-8bbb4562bbb69b8aef2684f3b3403850c2a755ee12e440b0f065376c2d663188.scope: Deactivated successfully.
Dec 15 10:34:58 compute-0 podman[74795]: 2025-12-15 10:34:58.582458956 +0000 UTC m=+0.684744888 container died 8bbb4562bbb69b8aef2684f3b3403850c2a755ee12e440b0f065376c2d663188 (image=quay.io/ceph/ceph:v19, name=elegant_thompson, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 15 10:34:58 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'status'
Dec 15 10:34:58 compute-0 ceph-mgr[74651]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 15 10:34:58 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'telegraf'
Dec 15 10:34:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:58.672+0000 7fd43c760140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 15 10:34:58 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/709611193' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 15 10:34:58 compute-0 ceph-mgr[74651]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 15 10:34:58 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'telemetry'
Dec 15 10:34:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:58.744+0000 7fd43c760140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 15 10:34:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-9395cfd03338d5ee9ce30d28e1396b29e9cf4f260dc4951e07ebcc5ada46fbe2-merged.mount: Deactivated successfully.
Dec 15 10:34:58 compute-0 podman[74795]: 2025-12-15 10:34:58.847860365 +0000 UTC m=+0.950146297 container remove 8bbb4562bbb69b8aef2684f3b3403850c2a755ee12e440b0f065376c2d663188 (image=quay.io/ceph/ceph:v19, name=elegant_thompson, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 15 10:34:58 compute-0 systemd[1]: libpod-conmon-8bbb4562bbb69b8aef2684f3b3403850c2a755ee12e440b0f065376c2d663188.scope: Deactivated successfully.
Dec 15 10:34:58 compute-0 ceph-mgr[74651]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 15 10:34:58 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'test_orchestrator'
Dec 15 10:34:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:58.922+0000 7fd43c760140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'volumes'
Dec 15 10:34:59 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:59.204+0000 7fd43c760140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'zabbix'
Dec 15 10:34:59 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:59.475+0000 7fd43c760140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 15 10:34:59 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:34:59.551+0000 7fd43c760140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: ms_deliver_dispatch: unhandled message 0x55a8b19909c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 15 10:34:59 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.difmqj
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: mgr handle_mgr_map Activating!
Dec 15 10:34:59 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.difmqj(active, starting, since 0.00979621s)
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: mgr handle_mgr_map I am now activating
Dec 15 10:34:59 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 15 10:34:59 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1006220448' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 15 10:34:59 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e1 all = 1
Dec 15 10:34:59 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 15 10:34:59 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1006220448' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 15 10:34:59 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 15 10:34:59 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1006220448' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 15 10:34:59 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 15 10:34:59 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1006220448' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 15 10:34:59 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.difmqj", "id": "compute-0.difmqj"} v 0)
Dec 15 10:34:59 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1006220448' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.difmqj", "id": "compute-0.difmqj"}]: dispatch
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: balancer
Dec 15 10:34:59 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : Manager daemon compute-0.difmqj is now available
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: crash
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [balancer INFO root] Starting
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: devicehealth
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [balancer INFO root] Optimize plan auto_2025-12-15_10:34:59
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [balancer INFO root] do_upmap
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [balancer INFO root] No pools available
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: iostat
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [devicehealth INFO root] Starting
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: nfs
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: orchestrator
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: pg_autoscaler
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: progress
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [progress INFO root] Loading...
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [progress INFO root] No stored events to load
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [progress INFO root] Loaded [] historic events
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [progress INFO root] Loaded OSDMap, ready.
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [rbd_support INFO root] recovery thread starting
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [rbd_support INFO root] starting setup
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: rbd_support
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: restful
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [restful INFO root] server_addr: :: server_port: 8003
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: status
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [restful WARNING root] server not running: no certificate configured
Dec 15 10:34:59 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/mirror_snapshot_schedule"} v 0)
Dec 15 10:34:59 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1006220448' entity='mgr.compute-0.difmqj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/mirror_snapshot_schedule"}]: dispatch
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: telemetry
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 15 10:34:59 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [rbd_support INFO root] PerfHandler: starting
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [rbd_support INFO root] TaskHandler: starting
Dec 15 10:34:59 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/trash_purge_schedule"} v 0)
Dec 15 10:34:59 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1006220448' entity='mgr.compute-0.difmqj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/trash_purge_schedule"}]: dispatch
Dec 15 10:34:59 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1006220448' entity='mgr.compute-0.difmqj' 
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: [rbd_support INFO root] setup complete
Dec 15 10:34:59 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Dec 15 10:34:59 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1006220448' entity='mgr.compute-0.difmqj' 
Dec 15 10:34:59 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Dec 15 10:34:59 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: volumes
Dec 15 10:34:59 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1006220448' entity='mgr.compute-0.difmqj' 
Dec 15 10:34:59 compute-0 ceph-mon[74356]: Activating manager daemon compute-0.difmqj
Dec 15 10:34:59 compute-0 ceph-mon[74356]: mgrmap e2: compute-0.difmqj(active, starting, since 0.00979621s)
Dec 15 10:34:59 compute-0 ceph-mon[74356]: from='mgr.14102 192.168.122.100:0/1006220448' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 15 10:34:59 compute-0 ceph-mon[74356]: from='mgr.14102 192.168.122.100:0/1006220448' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 15 10:34:59 compute-0 ceph-mon[74356]: from='mgr.14102 192.168.122.100:0/1006220448' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 15 10:34:59 compute-0 ceph-mon[74356]: from='mgr.14102 192.168.122.100:0/1006220448' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 15 10:34:59 compute-0 ceph-mon[74356]: from='mgr.14102 192.168.122.100:0/1006220448' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.difmqj", "id": "compute-0.difmqj"}]: dispatch
Dec 15 10:34:59 compute-0 ceph-mon[74356]: Manager daemon compute-0.difmqj is now available
Dec 15 10:34:59 compute-0 ceph-mon[74356]: from='mgr.14102 192.168.122.100:0/1006220448' entity='mgr.compute-0.difmqj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/mirror_snapshot_schedule"}]: dispatch
Dec 15 10:34:59 compute-0 ceph-mon[74356]: from='mgr.14102 192.168.122.100:0/1006220448' entity='mgr.compute-0.difmqj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/trash_purge_schedule"}]: dispatch
Dec 15 10:34:59 compute-0 ceph-mon[74356]: from='mgr.14102 192.168.122.100:0/1006220448' entity='mgr.compute-0.difmqj' 
Dec 15 10:34:59 compute-0 ceph-mon[74356]: from='mgr.14102 192.168.122.100:0/1006220448' entity='mgr.compute-0.difmqj' 
Dec 15 10:34:59 compute-0 ceph-mon[74356]: from='mgr.14102 192.168.122.100:0/1006220448' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:00 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.difmqj(active, since 1.0203s)
Dec 15 10:35:00 compute-0 podman[74929]: 2025-12-15 10:35:00.915654366 +0000 UTC m=+0.044174984 container create 6f84a80a81c7457347f9ed6da0850543ac1b1fe61df1d32c0577120cead7f5e9 (image=quay.io/ceph/ceph:v19, name=angry_buck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 15 10:35:00 compute-0 systemd[1]: Started libpod-conmon-6f84a80a81c7457347f9ed6da0850543ac1b1fe61df1d32c0577120cead7f5e9.scope.
Dec 15 10:35:00 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bc587013eb2000ee881e86019c350aa411d6114448985da5eea2a0a54362fc4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bc587013eb2000ee881e86019c350aa411d6114448985da5eea2a0a54362fc4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bc587013eb2000ee881e86019c350aa411d6114448985da5eea2a0a54362fc4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:00 compute-0 podman[74929]: 2025-12-15 10:35:00.892362537 +0000 UTC m=+0.020883175 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:00 compute-0 podman[74929]: 2025-12-15 10:35:00.993386556 +0000 UTC m=+0.121907204 container init 6f84a80a81c7457347f9ed6da0850543ac1b1fe61df1d32c0577120cead7f5e9 (image=quay.io/ceph/ceph:v19, name=angry_buck, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:35:00 compute-0 podman[74929]: 2025-12-15 10:35:00.998480737 +0000 UTC m=+0.127001345 container start 6f84a80a81c7457347f9ed6da0850543ac1b1fe61df1d32c0577120cead7f5e9 (image=quay.io/ceph/ceph:v19, name=angry_buck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:35:01 compute-0 podman[74929]: 2025-12-15 10:35:01.002415221 +0000 UTC m=+0.130935869 container attach 6f84a80a81c7457347f9ed6da0850543ac1b1fe61df1d32c0577120cead7f5e9 (image=quay.io/ceph/ceph:v19, name=angry_buck, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 15 10:35:01 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 15 10:35:01 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/105396903' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 15 10:35:01 compute-0 angry_buck[74945]: 
Dec 15 10:35:01 compute-0 angry_buck[74945]: {
Dec 15 10:35:01 compute-0 angry_buck[74945]:     "fsid": "77365f67-614e-5a8d-b658-640395550c79",
Dec 15 10:35:01 compute-0 angry_buck[74945]:     "health": {
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "status": "HEALTH_OK",
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "checks": {},
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "mutes": []
Dec 15 10:35:01 compute-0 angry_buck[74945]:     },
Dec 15 10:35:01 compute-0 angry_buck[74945]:     "election_epoch": 5,
Dec 15 10:35:01 compute-0 angry_buck[74945]:     "quorum": [
Dec 15 10:35:01 compute-0 angry_buck[74945]:         0
Dec 15 10:35:01 compute-0 angry_buck[74945]:     ],
Dec 15 10:35:01 compute-0 angry_buck[74945]:     "quorum_names": [
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "compute-0"
Dec 15 10:35:01 compute-0 angry_buck[74945]:     ],
Dec 15 10:35:01 compute-0 angry_buck[74945]:     "quorum_age": 10,
Dec 15 10:35:01 compute-0 angry_buck[74945]:     "monmap": {
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "epoch": 1,
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "min_mon_release_name": "squid",
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "num_mons": 1
Dec 15 10:35:01 compute-0 angry_buck[74945]:     },
Dec 15 10:35:01 compute-0 angry_buck[74945]:     "osdmap": {
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "epoch": 1,
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "num_osds": 0,
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "num_up_osds": 0,
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "osd_up_since": 0,
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "num_in_osds": 0,
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "osd_in_since": 0,
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "num_remapped_pgs": 0
Dec 15 10:35:01 compute-0 angry_buck[74945]:     },
Dec 15 10:35:01 compute-0 angry_buck[74945]:     "pgmap": {
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "pgs_by_state": [],
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "num_pgs": 0,
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "num_pools": 0,
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "num_objects": 0,
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "data_bytes": 0,
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "bytes_used": 0,
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "bytes_avail": 0,
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "bytes_total": 0
Dec 15 10:35:01 compute-0 angry_buck[74945]:     },
Dec 15 10:35:01 compute-0 angry_buck[74945]:     "fsmap": {
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "epoch": 1,
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "btime": "2025-12-15T10:34:49:065673+0000",
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "by_rank": [],
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "up:standby": 0
Dec 15 10:35:01 compute-0 angry_buck[74945]:     },
Dec 15 10:35:01 compute-0 angry_buck[74945]:     "mgrmap": {
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "available": true,
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "num_standbys": 0,
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "modules": [
Dec 15 10:35:01 compute-0 angry_buck[74945]:             "iostat",
Dec 15 10:35:01 compute-0 angry_buck[74945]:             "nfs",
Dec 15 10:35:01 compute-0 angry_buck[74945]:             "restful"
Dec 15 10:35:01 compute-0 angry_buck[74945]:         ],
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "services": {}
Dec 15 10:35:01 compute-0 angry_buck[74945]:     },
Dec 15 10:35:01 compute-0 angry_buck[74945]:     "servicemap": {
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "epoch": 1,
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "modified": "2025-12-15T10:34:49.067589+0000",
Dec 15 10:35:01 compute-0 angry_buck[74945]:         "services": {}
Dec 15 10:35:01 compute-0 angry_buck[74945]:     },
Dec 15 10:35:01 compute-0 angry_buck[74945]:     "progress_events": {}
Dec 15 10:35:01 compute-0 angry_buck[74945]: }
Dec 15 10:35:01 compute-0 systemd[1]: libpod-6f84a80a81c7457347f9ed6da0850543ac1b1fe61df1d32c0577120cead7f5e9.scope: Deactivated successfully.
Dec 15 10:35:01 compute-0 conmon[74945]: conmon 6f84a80a81c7457347f9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6f84a80a81c7457347f9ed6da0850543ac1b1fe61df1d32c0577120cead7f5e9.scope/container/memory.events
Dec 15 10:35:01 compute-0 podman[74929]: 2025-12-15 10:35:01.460310635 +0000 UTC m=+0.588831273 container died 6f84a80a81c7457347f9ed6da0850543ac1b1fe61df1d32c0577120cead7f5e9 (image=quay.io/ceph/ceph:v19, name=angry_buck, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:35:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bc587013eb2000ee881e86019c350aa411d6114448985da5eea2a0a54362fc4-merged.mount: Deactivated successfully.
Dec 15 10:35:01 compute-0 podman[74929]: 2025-12-15 10:35:01.531467775 +0000 UTC m=+0.659988393 container remove 6f84a80a81c7457347f9ed6da0850543ac1b1fe61df1d32c0577120cead7f5e9 (image=quay.io/ceph/ceph:v19, name=angry_buck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:35:01 compute-0 systemd[1]: libpod-conmon-6f84a80a81c7457347f9ed6da0850543ac1b1fe61df1d32c0577120cead7f5e9.scope: Deactivated successfully.
Dec 15 10:35:01 compute-0 ceph-mgr[74651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 15 10:35:01 compute-0 ceph-mon[74356]: mgrmap e3: compute-0.difmqj(active, since 1.0203s)
Dec 15 10:35:01 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/105396903' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 15 10:35:01 compute-0 podman[74984]: 2025-12-15 10:35:01.599709082 +0000 UTC m=+0.046170917 container create b2780a8abbfd7a0660fee72097e292dcd62b586aada3e5571b25117e0656b902 (image=quay.io/ceph/ceph:v19, name=keen_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:35:01 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.difmqj(active, since 2s)
Dec 15 10:35:01 compute-0 systemd[1]: Started libpod-conmon-b2780a8abbfd7a0660fee72097e292dcd62b586aada3e5571b25117e0656b902.scope.
Dec 15 10:35:01 compute-0 podman[74984]: 2025-12-15 10:35:01.57570076 +0000 UTC m=+0.022162625 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:01 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29513cd1168dd5b6448d9a38c9e44824aa37becca5632da0f98c3f3381334953/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29513cd1168dd5b6448d9a38c9e44824aa37becca5632da0f98c3f3381334953/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29513cd1168dd5b6448d9a38c9e44824aa37becca5632da0f98c3f3381334953/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29513cd1168dd5b6448d9a38c9e44824aa37becca5632da0f98c3f3381334953/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:01 compute-0 podman[74984]: 2025-12-15 10:35:01.701484125 +0000 UTC m=+0.147945980 container init b2780a8abbfd7a0660fee72097e292dcd62b586aada3e5571b25117e0656b902 (image=quay.io/ceph/ceph:v19, name=keen_keldysh, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 15 10:35:01 compute-0 podman[74984]: 2025-12-15 10:35:01.707695022 +0000 UTC m=+0.154156857 container start b2780a8abbfd7a0660fee72097e292dcd62b586aada3e5571b25117e0656b902 (image=quay.io/ceph/ceph:v19, name=keen_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 15 10:35:01 compute-0 podman[74984]: 2025-12-15 10:35:01.711205063 +0000 UTC m=+0.157666918 container attach b2780a8abbfd7a0660fee72097e292dcd62b586aada3e5571b25117e0656b902 (image=quay.io/ceph/ceph:v19, name=keen_keldysh, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 15 10:35:02 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 15 10:35:02 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1464648263' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 15 10:35:02 compute-0 keen_keldysh[75000]: 
Dec 15 10:35:02 compute-0 keen_keldysh[75000]: [global]
Dec 15 10:35:02 compute-0 keen_keldysh[75000]:         fsid = 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:35:02 compute-0 keen_keldysh[75000]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 15 10:35:02 compute-0 systemd[1]: libpod-b2780a8abbfd7a0660fee72097e292dcd62b586aada3e5571b25117e0656b902.scope: Deactivated successfully.
Dec 15 10:35:02 compute-0 podman[74984]: 2025-12-15 10:35:02.053524085 +0000 UTC m=+0.499985960 container died b2780a8abbfd7a0660fee72097e292dcd62b586aada3e5571b25117e0656b902 (image=quay.io/ceph/ceph:v19, name=keen_keldysh, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:35:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-29513cd1168dd5b6448d9a38c9e44824aa37becca5632da0f98c3f3381334953-merged.mount: Deactivated successfully.
Dec 15 10:35:02 compute-0 podman[74984]: 2025-12-15 10:35:02.095278701 +0000 UTC m=+0.541740536 container remove b2780a8abbfd7a0660fee72097e292dcd62b586aada3e5571b25117e0656b902 (image=quay.io/ceph/ceph:v19, name=keen_keldysh, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 15 10:35:02 compute-0 systemd[1]: libpod-conmon-b2780a8abbfd7a0660fee72097e292dcd62b586aada3e5571b25117e0656b902.scope: Deactivated successfully.
Dec 15 10:35:02 compute-0 podman[75039]: 2025-12-15 10:35:02.157841718 +0000 UTC m=+0.041362585 container create 09890bf70564d1334f43971107a751547ab88ffe8d92568ac46990955920da48 (image=quay.io/ceph/ceph:v19, name=nostalgic_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 15 10:35:02 compute-0 systemd[1]: Started libpod-conmon-09890bf70564d1334f43971107a751547ab88ffe8d92568ac46990955920da48.scope.
Dec 15 10:35:02 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a48c2ff5d7b41e3efa1d41d914fdfe903965d23f73d7b472c92cca7c0beb563/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a48c2ff5d7b41e3efa1d41d914fdfe903965d23f73d7b472c92cca7c0beb563/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a48c2ff5d7b41e3efa1d41d914fdfe903965d23f73d7b472c92cca7c0beb563/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:02 compute-0 podman[75039]: 2025-12-15 10:35:02.234584225 +0000 UTC m=+0.118105122 container init 09890bf70564d1334f43971107a751547ab88ffe8d92568ac46990955920da48 (image=quay.io/ceph/ceph:v19, name=nostalgic_brattain, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 15 10:35:02 compute-0 podman[75039]: 2025-12-15 10:35:02.140947432 +0000 UTC m=+0.024468309 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:02 compute-0 podman[75039]: 2025-12-15 10:35:02.24010544 +0000 UTC m=+0.123626307 container start 09890bf70564d1334f43971107a751547ab88ffe8d92568ac46990955920da48 (image=quay.io/ceph/ceph:v19, name=nostalgic_brattain, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:35:02 compute-0 podman[75039]: 2025-12-15 10:35:02.244027686 +0000 UTC m=+0.127548573 container attach 09890bf70564d1334f43971107a751547ab88ffe8d92568ac46990955920da48 (image=quay.io/ceph/ceph:v19, name=nostalgic_brattain, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 15 10:35:02 compute-0 ceph-mon[74356]: mgrmap e4: compute-0.difmqj(active, since 2s)
Dec 15 10:35:02 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/1464648263' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 15 10:35:02 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Dec 15 10:35:02 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1068118916' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec 15 10:35:03 compute-0 ceph-mgr[74651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 15 10:35:03 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/1068118916' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec 15 10:35:03 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1068118916' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 15 10:35:03 compute-0 ceph-mgr[74651]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 15 10:35:03 compute-0 ceph-mgr[74651]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 15 10:35:03 compute-0 ceph-mgr[74651]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 15 10:35:03 compute-0 ceph-mgr[74651]: mgr respawn  1: '-n'
Dec 15 10:35:03 compute-0 ceph-mgr[74651]: mgr respawn  2: 'mgr.compute-0.difmqj'
Dec 15 10:35:03 compute-0 ceph-mgr[74651]: mgr respawn  3: '-f'
Dec 15 10:35:03 compute-0 ceph-mgr[74651]: mgr respawn  4: '--setuser'
Dec 15 10:35:03 compute-0 ceph-mgr[74651]: mgr respawn  5: 'ceph'
Dec 15 10:35:03 compute-0 ceph-mgr[74651]: mgr respawn  6: '--setgroup'
Dec 15 10:35:03 compute-0 ceph-mgr[74651]: mgr respawn  7: 'ceph'
Dec 15 10:35:03 compute-0 ceph-mgr[74651]: mgr respawn  8: '--default-log-to-file=false'
Dec 15 10:35:03 compute-0 ceph-mgr[74651]: mgr respawn  9: '--default-log-to-journald=true'
Dec 15 10:35:03 compute-0 ceph-mgr[74651]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 15 10:35:03 compute-0 ceph-mgr[74651]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 15 10:35:03 compute-0 ceph-mgr[74651]: mgr respawn  exe_path /proc/self/exe
Dec 15 10:35:03 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.difmqj(active, since 4s)
Dec 15 10:35:03 compute-0 systemd[1]: libpod-09890bf70564d1334f43971107a751547ab88ffe8d92568ac46990955920da48.scope: Deactivated successfully.
Dec 15 10:35:03 compute-0 podman[75039]: 2025-12-15 10:35:03.668567308 +0000 UTC m=+1.552088175 container died 09890bf70564d1334f43971107a751547ab88ffe8d92568ac46990955920da48 (image=quay.io/ceph/ceph:v19, name=nostalgic_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 15 10:35:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a48c2ff5d7b41e3efa1d41d914fdfe903965d23f73d7b472c92cca7c0beb563-merged.mount: Deactivated successfully.
Dec 15 10:35:03 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ignoring --setuser ceph since I am not root
Dec 15 10:35:03 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ignoring --setgroup ceph since I am not root
Dec 15 10:35:03 compute-0 podman[75039]: 2025-12-15 10:35:03.740773551 +0000 UTC m=+1.624294418 container remove 09890bf70564d1334f43971107a751547ab88ffe8d92568ac46990955920da48 (image=quay.io/ceph/ceph:v19, name=nostalgic_brattain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:35:03 compute-0 ceph-mgr[74651]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 15 10:35:03 compute-0 ceph-mgr[74651]: pidfile_write: ignore empty --pid-file
Dec 15 10:35:03 compute-0 systemd[1]: libpod-conmon-09890bf70564d1334f43971107a751547ab88ffe8d92568ac46990955920da48.scope: Deactivated successfully.
Dec 15 10:35:03 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'alerts'
Dec 15 10:35:03 compute-0 podman[75102]: 2025-12-15 10:35:03.799564819 +0000 UTC m=+0.041356035 container create dc73e411bfe3afc44ba09fe64bbe3e3d88b78e95410e54df445f253ff39773e3 (image=quay.io/ceph/ceph:v19, name=peaceful_lederberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 15 10:35:03 compute-0 systemd[1]: Started libpod-conmon-dc73e411bfe3afc44ba09fe64bbe3e3d88b78e95410e54df445f253ff39773e3.scope.
Dec 15 10:35:03 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a61ac64fa1df9d2dcf026dc920ad1c8322f18838086b5f6f804e94a9dfdf78e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a61ac64fa1df9d2dcf026dc920ad1c8322f18838086b5f6f804e94a9dfdf78e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a61ac64fa1df9d2dcf026dc920ad1c8322f18838086b5f6f804e94a9dfdf78e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:03 compute-0 podman[75102]: 2025-12-15 10:35:03.854617558 +0000 UTC m=+0.096408774 container init dc73e411bfe3afc44ba09fe64bbe3e3d88b78e95410e54df445f253ff39773e3 (image=quay.io/ceph/ceph:v19, name=peaceful_lederberg, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 15 10:35:03 compute-0 podman[75102]: 2025-12-15 10:35:03.859458441 +0000 UTC m=+0.101249657 container start dc73e411bfe3afc44ba09fe64bbe3e3d88b78e95410e54df445f253ff39773e3 (image=quay.io/ceph/ceph:v19, name=peaceful_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 15 10:35:03 compute-0 podman[75102]: 2025-12-15 10:35:03.862483237 +0000 UTC m=+0.104274453 container attach dc73e411bfe3afc44ba09fe64bbe3e3d88b78e95410e54df445f253ff39773e3 (image=quay.io/ceph/ceph:v19, name=peaceful_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 15 10:35:03 compute-0 podman[75102]: 2025-12-15 10:35:03.782504027 +0000 UTC m=+0.024295273 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:03 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:03.903+0000 7fca5e3b0140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 15 10:35:03 compute-0 ceph-mgr[74651]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 15 10:35:03 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'balancer'
Dec 15 10:35:03 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:03.989+0000 7fca5e3b0140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 15 10:35:03 compute-0 ceph-mgr[74651]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 15 10:35:03 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'cephadm'
Dec 15 10:35:04 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 15 10:35:04 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1322545054' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 15 10:35:04 compute-0 peaceful_lederberg[75131]: {
Dec 15 10:35:04 compute-0 peaceful_lederberg[75131]:     "epoch": 5,
Dec 15 10:35:04 compute-0 peaceful_lederberg[75131]:     "available": true,
Dec 15 10:35:04 compute-0 peaceful_lederberg[75131]:     "active_name": "compute-0.difmqj",
Dec 15 10:35:04 compute-0 peaceful_lederberg[75131]:     "num_standby": 0
Dec 15 10:35:04 compute-0 peaceful_lederberg[75131]: }
Dec 15 10:35:04 compute-0 systemd[1]: libpod-dc73e411bfe3afc44ba09fe64bbe3e3d88b78e95410e54df445f253ff39773e3.scope: Deactivated successfully.
Dec 15 10:35:04 compute-0 podman[75102]: 2025-12-15 10:35:04.269865955 +0000 UTC m=+0.511657201 container died dc73e411bfe3afc44ba09fe64bbe3e3d88b78e95410e54df445f253ff39773e3 (image=quay.io/ceph/ceph:v19, name=peaceful_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid)
Dec 15 10:35:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a61ac64fa1df9d2dcf026dc920ad1c8322f18838086b5f6f804e94a9dfdf78e-merged.mount: Deactivated successfully.
Dec 15 10:35:04 compute-0 podman[75102]: 2025-12-15 10:35:04.313214612 +0000 UTC m=+0.555005828 container remove dc73e411bfe3afc44ba09fe64bbe3e3d88b78e95410e54df445f253ff39773e3 (image=quay.io/ceph/ceph:v19, name=peaceful_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:35:04 compute-0 systemd[1]: libpod-conmon-dc73e411bfe3afc44ba09fe64bbe3e3d88b78e95410e54df445f253ff39773e3.scope: Deactivated successfully.
Dec 15 10:35:04 compute-0 podman[75167]: 2025-12-15 10:35:04.373279249 +0000 UTC m=+0.039667869 container create 19e79c506ad948245f3ab0b1f9ee91203e6a41b080c7c48e21ac4bd7f2628b74 (image=quay.io/ceph/ceph:v19, name=agitated_morse, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 15 10:35:04 compute-0 systemd[1]: Started libpod-conmon-19e79c506ad948245f3ab0b1f9ee91203e6a41b080c7c48e21ac4bd7f2628b74.scope.
Dec 15 10:35:04 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e3e83c03accb7e639645c399bc12ae3e9f885d8f0d22b8847b88fad32a3c6dc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e3e83c03accb7e639645c399bc12ae3e9f885d8f0d22b8847b88fad32a3c6dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e3e83c03accb7e639645c399bc12ae3e9f885d8f0d22b8847b88fad32a3c6dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:04 compute-0 podman[75167]: 2025-12-15 10:35:04.356423154 +0000 UTC m=+0.022811794 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:04 compute-0 podman[75167]: 2025-12-15 10:35:04.456755831 +0000 UTC m=+0.123144451 container init 19e79c506ad948245f3ab0b1f9ee91203e6a41b080c7c48e21ac4bd7f2628b74 (image=quay.io/ceph/ceph:v19, name=agitated_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:35:04 compute-0 podman[75167]: 2025-12-15 10:35:04.46304514 +0000 UTC m=+0.129433760 container start 19e79c506ad948245f3ab0b1f9ee91203e6a41b080c7c48e21ac4bd7f2628b74 (image=quay.io/ceph/ceph:v19, name=agitated_morse, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:35:04 compute-0 podman[75167]: 2025-12-15 10:35:04.466915064 +0000 UTC m=+0.133303684 container attach 19e79c506ad948245f3ab0b1f9ee91203e6a41b080c7c48e21ac4bd7f2628b74 (image=quay.io/ceph/ceph:v19, name=agitated_morse, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 15 10:35:04 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/1068118916' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 15 10:35:04 compute-0 ceph-mon[74356]: mgrmap e5: compute-0.difmqj(active, since 4s)
Dec 15 10:35:04 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/1322545054' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 15 10:35:04 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'crash'
Dec 15 10:35:04 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:04.855+0000 7fca5e3b0140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 15 10:35:04 compute-0 ceph-mgr[74651]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 15 10:35:04 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'dashboard'
Dec 15 10:35:05 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'devicehealth'
Dec 15 10:35:05 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:05.523+0000 7fca5e3b0140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 15 10:35:05 compute-0 ceph-mgr[74651]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 15 10:35:05 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'diskprediction_local'
Dec 15 10:35:05 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 15 10:35:05 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 15 10:35:05 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]:   from numpy import show_config as show_numpy_config
Dec 15 10:35:05 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:05.700+0000 7fca5e3b0140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 15 10:35:05 compute-0 ceph-mgr[74651]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 15 10:35:05 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'influx'
Dec 15 10:35:05 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:05.777+0000 7fca5e3b0140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 15 10:35:05 compute-0 ceph-mgr[74651]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 15 10:35:05 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'insights'
Dec 15 10:35:05 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'iostat'
Dec 15 10:35:05 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:05.928+0000 7fca5e3b0140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 15 10:35:05 compute-0 ceph-mgr[74651]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 15 10:35:05 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'k8sevents'
Dec 15 10:35:06 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'localpool'
Dec 15 10:35:06 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'mds_autoscaler'
Dec 15 10:35:06 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'mirroring'
Dec 15 10:35:06 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'nfs'
Dec 15 10:35:07 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:07.030+0000 7fca5e3b0140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 15 10:35:07 compute-0 ceph-mgr[74651]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 15 10:35:07 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'orchestrator'
Dec 15 10:35:07 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:07.259+0000 7fca5e3b0140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 15 10:35:07 compute-0 ceph-mgr[74651]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 15 10:35:07 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'osd_perf_query'
Dec 15 10:35:07 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:07.341+0000 7fca5e3b0140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 15 10:35:07 compute-0 ceph-mgr[74651]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 15 10:35:07 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'osd_support'
Dec 15 10:35:07 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:07.416+0000 7fca5e3b0140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 15 10:35:07 compute-0 ceph-mgr[74651]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 15 10:35:07 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'pg_autoscaler'
Dec 15 10:35:07 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:07.503+0000 7fca5e3b0140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 15 10:35:07 compute-0 ceph-mgr[74651]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 15 10:35:07 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'progress'
Dec 15 10:35:07 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:07.583+0000 7fca5e3b0140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 15 10:35:07 compute-0 ceph-mgr[74651]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 15 10:35:07 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'prometheus'
Dec 15 10:35:07 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:07.962+0000 7fca5e3b0140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 15 10:35:07 compute-0 ceph-mgr[74651]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 15 10:35:07 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'rbd_support'
Dec 15 10:35:08 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:08.067+0000 7fca5e3b0140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 15 10:35:08 compute-0 ceph-mgr[74651]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 15 10:35:08 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'restful'
Dec 15 10:35:08 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'rgw'
Dec 15 10:35:08 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:08.522+0000 7fca5e3b0140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 15 10:35:08 compute-0 ceph-mgr[74651]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 15 10:35:08 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'rook'
Dec 15 10:35:09 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:09.133+0000 7fca5e3b0140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 15 10:35:09 compute-0 ceph-mgr[74651]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 15 10:35:09 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'selftest'
Dec 15 10:35:09 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:09.214+0000 7fca5e3b0140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 15 10:35:09 compute-0 ceph-mgr[74651]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 15 10:35:09 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'snap_schedule'
Dec 15 10:35:09 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:09.310+0000 7fca5e3b0140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 15 10:35:09 compute-0 ceph-mgr[74651]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 15 10:35:09 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'stats'
Dec 15 10:35:09 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'status'
Dec 15 10:35:09 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:09.477+0000 7fca5e3b0140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 15 10:35:09 compute-0 ceph-mgr[74651]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 15 10:35:09 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'telegraf'
Dec 15 10:35:09 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:09.555+0000 7fca5e3b0140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 15 10:35:09 compute-0 ceph-mgr[74651]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 15 10:35:09 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'telemetry'
Dec 15 10:35:09 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:09.723+0000 7fca5e3b0140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 15 10:35:09 compute-0 ceph-mgr[74651]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 15 10:35:09 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'test_orchestrator'
Dec 15 10:35:09 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:09.968+0000 7fca5e3b0140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 15 10:35:09 compute-0 ceph-mgr[74651]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 15 10:35:09 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'volumes'
Dec 15 10:35:10 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:10.260+0000 7fca5e3b0140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'zabbix'
Dec 15 10:35:10 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:35:10.328+0000 7fca5e3b0140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 15 10:35:10 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : Active manager daemon compute-0.difmqj restarted
Dec 15 10:35:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec 15 10:35:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: ms_deliver_dispatch: unhandled message 0x55f10daf6d00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 15 10:35:10 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.difmqj
Dec 15 10:35:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 15 10:35:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 15 10:35:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: mgr handle_mgr_map Activating!
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: mgr handle_mgr_map I am now activating
Dec 15 10:35:10 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec 15 10:35:10 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.difmqj(active, starting, since 0.290759s)
Dec 15 10:35:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 15 10:35:10 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 15 10:35:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.difmqj", "id": "compute-0.difmqj"} v 0)
Dec 15 10:35:10 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.difmqj", "id": "compute-0.difmqj"}]: dispatch
Dec 15 10:35:10 compute-0 ceph-mon[74356]: Active manager daemon compute-0.difmqj restarted
Dec 15 10:35:10 compute-0 ceph-mon[74356]: Activating manager daemon compute-0.difmqj
Dec 15 10:35:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 15 10:35:10 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 15 10:35:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e1 all = 1
Dec 15 10:35:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 15 10:35:10 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 15 10:35:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 15 10:35:10 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: balancer
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [balancer INFO root] Starting
Dec 15 10:35:10 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : Manager daemon compute-0.difmqj is now available
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [balancer INFO root] Optimize plan auto_2025-12-15_10:35:10
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [balancer INFO root] do_upmap
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [balancer INFO root] No pools available
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec 15 10:35:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Dec 15 10:35:10 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Dec 15 10:35:10 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: cephadm
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: crash
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: devicehealth
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: iostat
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: nfs
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [devicehealth INFO root] Starting
Dec 15 10:35:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 15 10:35:10 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: orchestrator
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: pg_autoscaler
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: progress
Dec 15 10:35:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 15 10:35:10 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [progress INFO root] Loading...
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [progress INFO root] No stored events to load
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [progress INFO root] Loaded [] historic events
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [progress INFO root] Loaded OSDMap, ready.
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [rbd_support INFO root] recovery thread starting
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [rbd_support INFO root] starting setup
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: rbd_support
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: restful
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: status
Dec 15 10:35:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/mirror_snapshot_schedule"} v 0)
Dec 15 10:35:10 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/mirror_snapshot_schedule"}]: dispatch
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [restful INFO root] server_addr: :: server_port: 8003
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [restful WARNING root] server not running: no certificate configured
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: telemetry
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [rbd_support INFO root] PerfHandler: starting
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [rbd_support INFO root] TaskHandler: starting
Dec 15 10:35:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/trash_purge_schedule"} v 0)
Dec 15 10:35:10 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/trash_purge_schedule"}]: dispatch
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: [rbd_support INFO root] setup complete
Dec 15 10:35:10 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: volumes
Dec 15 10:35:11 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019931191 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:35:11 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Dec 15 10:35:11 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:11 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Dec 15 10:35:11 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:11 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 15 10:35:11 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.difmqj(active, since 1.31411s)
Dec 15 10:35:11 compute-0 ceph-mon[74356]: osdmap e2: 0 total, 0 up, 0 in
Dec 15 10:35:11 compute-0 ceph-mon[74356]: mgrmap e6: compute-0.difmqj(active, starting, since 0.290759s)
Dec 15 10:35:11 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 15 10:35:11 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.difmqj", "id": "compute-0.difmqj"}]: dispatch
Dec 15 10:35:11 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 15 10:35:11 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 15 10:35:11 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 15 10:35:11 compute-0 ceph-mon[74356]: Manager daemon compute-0.difmqj is now available
Dec 15 10:35:11 compute-0 ceph-mon[74356]: Found migration_current of "None". Setting to last migration.
Dec 15 10:35:11 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:11 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:11 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 15 10:35:11 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 15 10:35:11 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/mirror_snapshot_schedule"}]: dispatch
Dec 15 10:35:11 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/trash_purge_schedule"}]: dispatch
Dec 15 10:35:11 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:11 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:11 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 15 10:35:11 compute-0 agitated_morse[75192]: {
Dec 15 10:35:11 compute-0 agitated_morse[75192]:     "mgrmap_epoch": 7,
Dec 15 10:35:11 compute-0 agitated_morse[75192]:     "initialized": true
Dec 15 10:35:11 compute-0 agitated_morse[75192]: }
Dec 15 10:35:11 compute-0 systemd[1]: libpod-19e79c506ad948245f3ab0b1f9ee91203e6a41b080c7c48e21ac4bd7f2628b74.scope: Deactivated successfully.
Dec 15 10:35:11 compute-0 podman[75167]: 2025-12-15 10:35:11.673495514 +0000 UTC m=+7.339884144 container died 19e79c506ad948245f3ab0b1f9ee91203e6a41b080c7c48e21ac4bd7f2628b74 (image=quay.io/ceph/ceph:v19, name=agitated_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:35:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e3e83c03accb7e639645c399bc12ae3e9f885d8f0d22b8847b88fad32a3c6dc-merged.mount: Deactivated successfully.
Dec 15 10:35:11 compute-0 podman[75167]: 2025-12-15 10:35:11.721907165 +0000 UTC m=+7.388295785 container remove 19e79c506ad948245f3ab0b1f9ee91203e6a41b080c7c48e21ac4bd7f2628b74 (image=quay.io/ceph/ceph:v19, name=agitated_morse, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:35:11 compute-0 systemd[1]: libpod-conmon-19e79c506ad948245f3ab0b1f9ee91203e6a41b080c7c48e21ac4bd7f2628b74.scope: Deactivated successfully.
Dec 15 10:35:11 compute-0 podman[75343]: 2025-12-15 10:35:11.781877494 +0000 UTC m=+0.041046193 container create ff086fb415abd25b5653349931ea753849e8cbd11bbd748cdb9c5827540c3516 (image=quay.io/ceph/ceph:v19, name=nifty_lewin, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 15 10:35:11 compute-0 systemd[1]: Started libpod-conmon-ff086fb415abd25b5653349931ea753849e8cbd11bbd748cdb9c5827540c3516.scope.
Dec 15 10:35:11 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/843fa871a79a0eba6e381a2f8b8d75e0fba09317f876bbdaf40d7f3ba9382d35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/843fa871a79a0eba6e381a2f8b8d75e0fba09317f876bbdaf40d7f3ba9382d35/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/843fa871a79a0eba6e381a2f8b8d75e0fba09317f876bbdaf40d7f3ba9382d35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:11 compute-0 podman[75343]: 2025-12-15 10:35:11.853510265 +0000 UTC m=+0.112678984 container init ff086fb415abd25b5653349931ea753849e8cbd11bbd748cdb9c5827540c3516 (image=quay.io/ceph/ceph:v19, name=nifty_lewin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1)
Dec 15 10:35:11 compute-0 podman[75343]: 2025-12-15 10:35:11.762988299 +0000 UTC m=+0.022157028 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:11 compute-0 podman[75343]: 2025-12-15 10:35:11.859612784 +0000 UTC m=+0.118781483 container start ff086fb415abd25b5653349931ea753849e8cbd11bbd748cdb9c5827540c3516 (image=quay.io/ceph/ceph:v19, name=nifty_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 15 10:35:11 compute-0 podman[75343]: 2025-12-15 10:35:11.863393782 +0000 UTC m=+0.122562501 container attach ff086fb415abd25b5653349931ea753849e8cbd11bbd748cdb9c5827540c3516 (image=quay.io/ceph/ceph:v19, name=nifty_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 15 10:35:12 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:12 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Dec 15 10:35:12 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:12 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 15 10:35:12 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 15 10:35:12 compute-0 systemd[1]: libpod-ff086fb415abd25b5653349931ea753849e8cbd11bbd748cdb9c5827540c3516.scope: Deactivated successfully.
Dec 15 10:35:12 compute-0 podman[75343]: 2025-12-15 10:35:12.224660573 +0000 UTC m=+0.483829272 container died ff086fb415abd25b5653349931ea753849e8cbd11bbd748cdb9c5827540c3516 (image=quay.io/ceph/ceph:v19, name=nifty_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:35:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-843fa871a79a0eba6e381a2f8b8d75e0fba09317f876bbdaf40d7f3ba9382d35-merged.mount: Deactivated successfully.
Dec 15 10:35:12 compute-0 podman[75343]: 2025-12-15 10:35:12.264510379 +0000 UTC m=+0.523679078 container remove ff086fb415abd25b5653349931ea753849e8cbd11bbd748cdb9c5827540c3516 (image=quay.io/ceph/ceph:v19, name=nifty_lewin, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:35:12 compute-0 systemd[1]: libpod-conmon-ff086fb415abd25b5653349931ea753849e8cbd11bbd748cdb9c5827540c3516.scope: Deactivated successfully.
Dec 15 10:35:12 compute-0 podman[75398]: 2025-12-15 10:35:12.336358306 +0000 UTC m=+0.047232255 container create 3231b5e7835d7fd631dfb9bdcc5bb5967af74929aab3a966176fc09031a90617 (image=quay.io/ceph/ceph:v19, name=funny_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:35:12 compute-0 systemd[1]: Started libpod-conmon-3231b5e7835d7fd631dfb9bdcc5bb5967af74929aab3a966176fc09031a90617.scope.
Dec 15 10:35:12 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f062040b2dd6f9c0aa712f1a02b551dd741c2655ffb45e7ad4c695f967aae85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f062040b2dd6f9c0aa712f1a02b551dd741c2655ffb45e7ad4c695f967aae85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f062040b2dd6f9c0aa712f1a02b551dd741c2655ffb45e7ad4c695f967aae85/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:12 compute-0 podman[75398]: 2025-12-15 10:35:12.403506348 +0000 UTC m=+0.114380277 container init 3231b5e7835d7fd631dfb9bdcc5bb5967af74929aab3a966176fc09031a90617 (image=quay.io/ceph/ceph:v19, name=funny_robinson, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:35:12 compute-0 podman[75398]: 2025-12-15 10:35:12.409082391 +0000 UTC m=+0.119956320 container start 3231b5e7835d7fd631dfb9bdcc5bb5967af74929aab3a966176fc09031a90617 (image=quay.io/ceph/ceph:v19, name=funny_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:35:12 compute-0 podman[75398]: 2025-12-15 10:35:12.314567271 +0000 UTC m=+0.025441220 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:12 compute-0 podman[75398]: 2025-12-15 10:35:12.415016045 +0000 UTC m=+0.125890004 container attach 3231b5e7835d7fd631dfb9bdcc5bb5967af74929aab3a966176fc09031a90617 (image=quay.io/ceph/ceph:v19, name=funny_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:35:12 compute-0 ceph-mgr[74651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 15 10:35:12 compute-0 ceph-mon[74356]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 15 10:35:12 compute-0 ceph-mon[74356]: mgrmap e7: compute-0.difmqj(active, since 1.31411s)
Dec 15 10:35:12 compute-0 ceph-mon[74356]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 15 10:35:12 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:12 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 15 10:35:12 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:12 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Dec 15 10:35:12 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:12 compute-0 ceph-mgr[74651]: [cephadm INFO root] Set ssh ssh_user
Dec 15 10:35:12 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec 15 10:35:12 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Dec 15 10:35:12 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:12 compute-0 ceph-mgr[74651]: [cephadm INFO root] Set ssh ssh_config
Dec 15 10:35:12 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec 15 10:35:12 compute-0 ceph-mgr[74651]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec 15 10:35:12 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec 15 10:35:12 compute-0 funny_robinson[75414]: ssh user set to ceph-admin. sudo will be used
Dec 15 10:35:12 compute-0 systemd[1]: libpod-3231b5e7835d7fd631dfb9bdcc5bb5967af74929aab3a966176fc09031a90617.scope: Deactivated successfully.
Dec 15 10:35:12 compute-0 podman[75398]: 2025-12-15 10:35:12.788887097 +0000 UTC m=+0.499761026 container died 3231b5e7835d7fd631dfb9bdcc5bb5967af74929aab3a966176fc09031a90617 (image=quay.io/ceph/ceph:v19, name=funny_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 15 10:35:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f062040b2dd6f9c0aa712f1a02b551dd741c2655ffb45e7ad4c695f967aae85-merged.mount: Deactivated successfully.
Dec 15 10:35:12 compute-0 podman[75398]: 2025-12-15 10:35:12.821285572 +0000 UTC m=+0.532159501 container remove 3231b5e7835d7fd631dfb9bdcc5bb5967af74929aab3a966176fc09031a90617 (image=quay.io/ceph/ceph:v19, name=funny_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:35:12 compute-0 systemd[1]: libpod-conmon-3231b5e7835d7fd631dfb9bdcc5bb5967af74929aab3a966176fc09031a90617.scope: Deactivated successfully.
Dec 15 10:35:12 compute-0 podman[75453]: 2025-12-15 10:35:12.887837795 +0000 UTC m=+0.048287299 container create 1b91eca9962ca475dc5988781918644e5fd2db85aa8a6d6cf63962797af75119 (image=quay.io/ceph/ceph:v19, name=beautiful_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:35:12 compute-0 systemd[1]: Started libpod-conmon-1b91eca9962ca475dc5988781918644e5fd2db85aa8a6d6cf63962797af75119.scope.
Dec 15 10:35:12 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:12 compute-0 podman[75453]: 2025-12-15 10:35:12.858827855 +0000 UTC m=+0.019277379 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a87f706074426ad2b58369ac21deb132796445aafb94140d0478dbb9975d9db/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a87f706074426ad2b58369ac21deb132796445aafb94140d0478dbb9975d9db/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a87f706074426ad2b58369ac21deb132796445aafb94140d0478dbb9975d9db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a87f706074426ad2b58369ac21deb132796445aafb94140d0478dbb9975d9db/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a87f706074426ad2b58369ac21deb132796445aafb94140d0478dbb9975d9db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:12 compute-0 podman[75453]: 2025-12-15 10:35:12.969149936 +0000 UTC m=+0.129599450 container init 1b91eca9962ca475dc5988781918644e5fd2db85aa8a6d6cf63962797af75119 (image=quay.io/ceph/ceph:v19, name=beautiful_chatterjee, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True)
Dec 15 10:35:12 compute-0 podman[75453]: 2025-12-15 10:35:12.978328301 +0000 UTC m=+0.138777805 container start 1b91eca9962ca475dc5988781918644e5fd2db85aa8a6d6cf63962797af75119 (image=quay.io/ceph/ceph:v19, name=beautiful_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec 15 10:35:12 compute-0 podman[75453]: 2025-12-15 10:35:12.991264442 +0000 UTC m=+0.151713936 container attach 1b91eca9962ca475dc5988781918644e5fd2db85aa8a6d6cf63962797af75119 (image=quay.io/ceph/ceph:v19, name=beautiful_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:35:13 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.difmqj(active, since 2s)
Dec 15 10:35:13 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:13 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Dec 15 10:35:13 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:13 compute-0 ceph-mgr[74651]: [cephadm INFO root] Set ssh ssh_identity_key
Dec 15 10:35:13 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec 15 10:35:13 compute-0 ceph-mgr[74651]: [cephadm INFO root] Set ssh private key
Dec 15 10:35:13 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Set ssh private key
Dec 15 10:35:13 compute-0 systemd[1]: libpod-1b91eca9962ca475dc5988781918644e5fd2db85aa8a6d6cf63962797af75119.scope: Deactivated successfully.
Dec 15 10:35:13 compute-0 conmon[75469]: conmon 1b91eca9962ca475dc59 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1b91eca9962ca475dc5988781918644e5fd2db85aa8a6d6cf63962797af75119.scope/container/memory.events
Dec 15 10:35:13 compute-0 podman[75453]: 2025-12-15 10:35:13.363015538 +0000 UTC m=+0.523465062 container died 1b91eca9962ca475dc5988781918644e5fd2db85aa8a6d6cf63962797af75119 (image=quay.io/ceph/ceph:v19, name=beautiful_chatterjee, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 15 10:35:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a87f706074426ad2b58369ac21deb132796445aafb94140d0478dbb9975d9db-merged.mount: Deactivated successfully.
Dec 15 10:35:13 compute-0 ceph-mgr[74651]: [cephadm INFO cherrypy.error] [15/Dec/2025:10:35:13] ENGINE Bus STARTING
Dec 15 10:35:13 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : [15/Dec/2025:10:35:13] ENGINE Bus STARTING
Dec 15 10:35:13 compute-0 podman[75453]: 2025-12-15 10:35:13.395464064 +0000 UTC m=+0.555913568 container remove 1b91eca9962ca475dc5988781918644e5fd2db85aa8a6d6cf63962797af75119 (image=quay.io/ceph/ceph:v19, name=beautiful_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:35:13 compute-0 systemd[1]: libpod-conmon-1b91eca9962ca475dc5988781918644e5fd2db85aa8a6d6cf63962797af75119.scope: Deactivated successfully.
Dec 15 10:35:13 compute-0 podman[75516]: 2025-12-15 10:35:13.44824159 +0000 UTC m=+0.036105250 container create 9715c128ce055b9d3d528f62de34027ba1f2a59c1da3ce33d7efb0be8051130e (image=quay.io/ceph/ceph:v19, name=upbeat_chaplygin, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 15 10:35:13 compute-0 systemd[1]: Started libpod-conmon-9715c128ce055b9d3d528f62de34027ba1f2a59c1da3ce33d7efb0be8051130e.scope.
Dec 15 10:35:13 compute-0 ceph-mgr[74651]: [cephadm INFO cherrypy.error] [15/Dec/2025:10:35:13] ENGINE Serving on http://192.168.122.100:8765
Dec 15 10:35:13 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : [15/Dec/2025:10:35:13] ENGINE Serving on http://192.168.122.100:8765
Dec 15 10:35:13 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708c5245853844ba11c2aa5d4771b380fde1221b05f4d1912e5fa540bdf6deb9/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708c5245853844ba11c2aa5d4771b380fde1221b05f4d1912e5fa540bdf6deb9/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708c5245853844ba11c2aa5d4771b380fde1221b05f4d1912e5fa540bdf6deb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708c5245853844ba11c2aa5d4771b380fde1221b05f4d1912e5fa540bdf6deb9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708c5245853844ba11c2aa5d4771b380fde1221b05f4d1912e5fa540bdf6deb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:13 compute-0 podman[75516]: 2025-12-15 10:35:13.526517007 +0000 UTC m=+0.114380687 container init 9715c128ce055b9d3d528f62de34027ba1f2a59c1da3ce33d7efb0be8051130e (image=quay.io/ceph/ceph:v19, name=upbeat_chaplygin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:35:13 compute-0 podman[75516]: 2025-12-15 10:35:13.433909506 +0000 UTC m=+0.021773186 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:13 compute-0 podman[75516]: 2025-12-15 10:35:13.532747241 +0000 UTC m=+0.120610901 container start 9715c128ce055b9d3d528f62de34027ba1f2a59c1da3ce33d7efb0be8051130e (image=quay.io/ceph/ceph:v19, name=upbeat_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 15 10:35:13 compute-0 podman[75516]: 2025-12-15 10:35:13.536317961 +0000 UTC m=+0.124181621 container attach 9715c128ce055b9d3d528f62de34027ba1f2a59c1da3ce33d7efb0be8051130e (image=quay.io/ceph/ceph:v19, name=upbeat_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 15 10:35:13 compute-0 ceph-mgr[74651]: [cephadm INFO cherrypy.error] [15/Dec/2025:10:35:13] ENGINE Serving on https://192.168.122.100:7150
Dec 15 10:35:13 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : [15/Dec/2025:10:35:13] ENGINE Serving on https://192.168.122.100:7150
Dec 15 10:35:13 compute-0 ceph-mgr[74651]: [cephadm INFO cherrypy.error] [15/Dec/2025:10:35:13] ENGINE Bus STARTED
Dec 15 10:35:13 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : [15/Dec/2025:10:35:13] ENGINE Bus STARTED
Dec 15 10:35:13 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 15 10:35:13 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 15 10:35:13 compute-0 ceph-mgr[74651]: [cephadm INFO cherrypy.error] [15/Dec/2025:10:35:13] ENGINE Client ('192.168.122.100', 42066) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 15 10:35:13 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : [15/Dec/2025:10:35:13] ENGINE Client ('192.168.122.100', 42066) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 15 10:35:13 compute-0 ceph-mon[74356]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:13 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:13 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:13 compute-0 ceph-mon[74356]: mgrmap e8: compute-0.difmqj(active, since 2s)
Dec 15 10:35:13 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:13 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 15 10:35:13 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:13 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Dec 15 10:35:14 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:14 compute-0 ceph-mgr[74651]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec 15 10:35:14 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec 15 10:35:14 compute-0 systemd[1]: libpod-9715c128ce055b9d3d528f62de34027ba1f2a59c1da3ce33d7efb0be8051130e.scope: Deactivated successfully.
Dec 15 10:35:14 compute-0 podman[75516]: 2025-12-15 10:35:14.614099387 +0000 UTC m=+1.201963057 container died 9715c128ce055b9d3d528f62de34027ba1f2a59c1da3ce33d7efb0be8051130e (image=quay.io/ceph/ceph:v19, name=upbeat_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:35:14 compute-0 ceph-mgr[74651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 15 10:35:14 compute-0 ceph-mon[74356]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:14 compute-0 ceph-mon[74356]: Set ssh ssh_user
Dec 15 10:35:14 compute-0 ceph-mon[74356]: Set ssh ssh_config
Dec 15 10:35:14 compute-0 ceph-mon[74356]: ssh user set to ceph-admin. sudo will be used
Dec 15 10:35:14 compute-0 ceph-mon[74356]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:14 compute-0 ceph-mon[74356]: Set ssh ssh_identity_key
Dec 15 10:35:14 compute-0 ceph-mon[74356]: Set ssh private key
Dec 15 10:35:14 compute-0 ceph-mon[74356]: [15/Dec/2025:10:35:13] ENGINE Bus STARTING
Dec 15 10:35:14 compute-0 ceph-mon[74356]: [15/Dec/2025:10:35:13] ENGINE Serving on http://192.168.122.100:8765
Dec 15 10:35:14 compute-0 ceph-mon[74356]: [15/Dec/2025:10:35:13] ENGINE Serving on https://192.168.122.100:7150
Dec 15 10:35:14 compute-0 ceph-mon[74356]: [15/Dec/2025:10:35:13] ENGINE Bus STARTED
Dec 15 10:35:14 compute-0 ceph-mon[74356]: [15/Dec/2025:10:35:13] ENGINE Client ('192.168.122.100', 42066) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 15 10:35:14 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-708c5245853844ba11c2aa5d4771b380fde1221b05f4d1912e5fa540bdf6deb9-merged.mount: Deactivated successfully.
Dec 15 10:35:15 compute-0 podman[75516]: 2025-12-15 10:35:15.027803865 +0000 UTC m=+1.615667525 container remove 9715c128ce055b9d3d528f62de34027ba1f2a59c1da3ce33d7efb0be8051130e (image=quay.io/ceph/ceph:v19, name=upbeat_chaplygin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 15 10:35:15 compute-0 systemd[1]: libpod-conmon-9715c128ce055b9d3d528f62de34027ba1f2a59c1da3ce33d7efb0be8051130e.scope: Deactivated successfully.
Dec 15 10:35:15 compute-0 podman[75581]: 2025-12-15 10:35:15.080558451 +0000 UTC m=+0.035774210 container create 539851d861c45410cc58f419dd2813abf8940ab6cf480e9d61ba6612fe6f241c (image=quay.io/ceph/ceph:v19, name=objective_margulis, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 15 10:35:15 compute-0 systemd[1]: Started libpod-conmon-539851d861c45410cc58f419dd2813abf8940ab6cf480e9d61ba6612fe6f241c.scope.
Dec 15 10:35:15 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7cab6108551253a8da5867bd56bc56367fdece6b6b1ff488e52d08ac4a5cae0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7cab6108551253a8da5867bd56bc56367fdece6b6b1ff488e52d08ac4a5cae0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7cab6108551253a8da5867bd56bc56367fdece6b6b1ff488e52d08ac4a5cae0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:15 compute-0 podman[75581]: 2025-12-15 10:35:15.14086024 +0000 UTC m=+0.096076039 container init 539851d861c45410cc58f419dd2813abf8940ab6cf480e9d61ba6612fe6f241c (image=quay.io/ceph/ceph:v19, name=objective_margulis, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 15 10:35:15 compute-0 podman[75581]: 2025-12-15 10:35:15.145268937 +0000 UTC m=+0.100484706 container start 539851d861c45410cc58f419dd2813abf8940ab6cf480e9d61ba6612fe6f241c (image=quay.io/ceph/ceph:v19, name=objective_margulis, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 15 10:35:15 compute-0 podman[75581]: 2025-12-15 10:35:15.148508607 +0000 UTC m=+0.103724396 container attach 539851d861c45410cc58f419dd2813abf8940ab6cf480e9d61ba6612fe6f241c (image=quay.io/ceph/ceph:v19, name=objective_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Dec 15 10:35:15 compute-0 podman[75581]: 2025-12-15 10:35:15.062868882 +0000 UTC m=+0.018084681 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:15 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:15 compute-0 objective_margulis[75598]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvNcUM8XWpPBC+aDknaEBX1H1Qemu+7c7TVJGgXYoJgFnGNcjzecNgnW+Q1S/aH7ClHK/mopETfJgx66KHA9ywbXtD8DwcprOWyaQ7PdJl3+3ylEPH1/U5TwLdVVdDYnz6C/88bD95TCFoYJCriWOahwW6ImC0DzTgaBuC/FqMxKU/Ns572oUXE9TINukiWZDldCIsYATKqUhCifOxqfw1akH/CpB1Cipi7JioUM9QFlmwOdgT4es2cVJBcTKx/Oi220PxMKPcO0ll5FuNTM/8bTg+kCxFyscfzfVe2Jh2VIrIW9upxS/AOruYtfQGKbjDFxaxNTO2wQoSN7aDQ5l8iluNe2VjEUOtkoQeVYeLMY3Qj3vQXPr9NYs0Vo4E+y+CTX2VA7wKCRSkwDjBXPm07uxlFHNb1mxDyEJFZySEDGyfnBwCdM6MywUMKxAlKRLaZKyDqEaRZRjJtIg99I8U/RKgOH7942KyReb7FVBEWOYYQCVAwccfxa+qahsaKKk= zuul@controller
Dec 15 10:35:15 compute-0 systemd[1]: libpod-539851d861c45410cc58f419dd2813abf8940ab6cf480e9d61ba6612fe6f241c.scope: Deactivated successfully.
Dec 15 10:35:15 compute-0 podman[75581]: 2025-12-15 10:35:15.496800766 +0000 UTC m=+0.452016535 container died 539851d861c45410cc58f419dd2813abf8940ab6cf480e9d61ba6612fe6f241c (image=quay.io/ceph/ceph:v19, name=objective_margulis, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 15 10:35:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7cab6108551253a8da5867bd56bc56367fdece6b6b1ff488e52d08ac4a5cae0-merged.mount: Deactivated successfully.
Dec 15 10:35:15 compute-0 podman[75581]: 2025-12-15 10:35:15.532357068 +0000 UTC m=+0.487572827 container remove 539851d861c45410cc58f419dd2813abf8940ab6cf480e9d61ba6612fe6f241c (image=quay.io/ceph/ceph:v19, name=objective_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:35:15 compute-0 systemd[1]: libpod-conmon-539851d861c45410cc58f419dd2813abf8940ab6cf480e9d61ba6612fe6f241c.scope: Deactivated successfully.
Dec 15 10:35:15 compute-0 podman[75636]: 2025-12-15 10:35:15.594956099 +0000 UTC m=+0.041515098 container create 14b6f588f189ae194ea94a825f1995f8ef3fe603c842553a5f5bc9a9340dde9f (image=quay.io/ceph/ceph:v19, name=dazzling_davinci, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 15 10:35:15 compute-0 systemd[1]: Started libpod-conmon-14b6f588f189ae194ea94a825f1995f8ef3fe603c842553a5f5bc9a9340dde9f.scope.
Dec 15 10:35:15 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35a35ee92009ce200720d428ee239035f9475b7a2feb8d56d6e32b787040a3d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35a35ee92009ce200720d428ee239035f9475b7a2feb8d56d6e32b787040a3d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35a35ee92009ce200720d428ee239035f9475b7a2feb8d56d6e32b787040a3d1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:15 compute-0 podman[75636]: 2025-12-15 10:35:15.656605991 +0000 UTC m=+0.103165010 container init 14b6f588f189ae194ea94a825f1995f8ef3fe603c842553a5f5bc9a9340dde9f (image=quay.io/ceph/ceph:v19, name=dazzling_davinci, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 15 10:35:15 compute-0 podman[75636]: 2025-12-15 10:35:15.660960505 +0000 UTC m=+0.107519504 container start 14b6f588f189ae194ea94a825f1995f8ef3fe603c842553a5f5bc9a9340dde9f (image=quay.io/ceph/ceph:v19, name=dazzling_davinci, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:35:15 compute-0 podman[75636]: 2025-12-15 10:35:15.663806814 +0000 UTC m=+0.110365813 container attach 14b6f588f189ae194ea94a825f1995f8ef3fe603c842553a5f5bc9a9340dde9f (image=quay.io/ceph/ceph:v19, name=dazzling_davinci, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 15 10:35:15 compute-0 podman[75636]: 2025-12-15 10:35:15.578550591 +0000 UTC m=+0.025109620 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:15 compute-0 ceph-mon[74356]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:15 compute-0 ceph-mon[74356]: Set ssh ssh_identity_pub
Dec 15 10:35:15 compute-0 ceph-mon[74356]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:16 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:16 compute-0 sshd-session[75678]: Accepted publickey for ceph-admin from 192.168.122.100 port 45946 ssh2: RSA SHA256:3azC1xJ0J6DfmukNQYX52hxlEZBJA1o57dtmni4O/KI
Dec 15 10:35:16 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 15 10:35:16 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053166 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:35:16 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 15 10:35:16 compute-0 systemd-logind[797]: New session 21 of user ceph-admin.
Dec 15 10:35:16 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 15 10:35:16 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 15 10:35:16 compute-0 systemd[75682]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 15 10:35:16 compute-0 systemd[75682]: Queued start job for default target Main User Target.
Dec 15 10:35:16 compute-0 sshd-session[75696]: Accepted publickey for ceph-admin from 192.168.122.100 port 45952 ssh2: RSA SHA256:3azC1xJ0J6DfmukNQYX52hxlEZBJA1o57dtmni4O/KI
Dec 15 10:35:16 compute-0 systemd[75682]: Created slice User Application Slice.
Dec 15 10:35:16 compute-0 systemd[75682]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 15 10:35:16 compute-0 systemd[75682]: Started Daily Cleanup of User's Temporary Directories.
Dec 15 10:35:16 compute-0 systemd[75682]: Reached target Paths.
Dec 15 10:35:16 compute-0 systemd[75682]: Reached target Timers.
Dec 15 10:35:16 compute-0 systemd-logind[797]: New session 23 of user ceph-admin.
Dec 15 10:35:16 compute-0 systemd[75682]: Starting D-Bus User Message Bus Socket...
Dec 15 10:35:16 compute-0 systemd[75682]: Starting Create User's Volatile Files and Directories...
Dec 15 10:35:16 compute-0 systemd[75682]: Finished Create User's Volatile Files and Directories.
Dec 15 10:35:16 compute-0 systemd[75682]: Listening on D-Bus User Message Bus Socket.
Dec 15 10:35:16 compute-0 systemd[75682]: Reached target Sockets.
Dec 15 10:35:16 compute-0 systemd[75682]: Reached target Basic System.
Dec 15 10:35:16 compute-0 systemd[75682]: Reached target Main User Target.
Dec 15 10:35:16 compute-0 systemd[75682]: Startup finished in 140ms.
Dec 15 10:35:16 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 15 10:35:16 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Dec 15 10:35:16 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Dec 15 10:35:16 compute-0 sshd-session[75678]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 15 10:35:16 compute-0 sshd-session[75696]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 15 10:35:16 compute-0 sudo[75703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:35:16 compute-0 sudo[75703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:16 compute-0 sudo[75703]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:16 compute-0 ceph-mgr[74651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 15 10:35:16 compute-0 sshd-session[75728]: Accepted publickey for ceph-admin from 192.168.122.100 port 45956 ssh2: RSA SHA256:3azC1xJ0J6DfmukNQYX52hxlEZBJA1o57dtmni4O/KI
Dec 15 10:35:16 compute-0 systemd-logind[797]: New session 24 of user ceph-admin.
Dec 15 10:35:16 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Dec 15 10:35:16 compute-0 sshd-session[75728]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 15 10:35:16 compute-0 sudo[75732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Dec 15 10:35:16 compute-0 sudo[75732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:16 compute-0 sudo[75732]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:17 compute-0 ceph-mon[74356]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:17 compute-0 sshd-session[75757]: Accepted publickey for ceph-admin from 192.168.122.100 port 45958 ssh2: RSA SHA256:3azC1xJ0J6DfmukNQYX52hxlEZBJA1o57dtmni4O/KI
Dec 15 10:35:17 compute-0 systemd-logind[797]: New session 25 of user ceph-admin.
Dec 15 10:35:17 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Dec 15 10:35:17 compute-0 sshd-session[75757]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 15 10:35:17 compute-0 sudo[75761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Dec 15 10:35:17 compute-0 sudo[75761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:17 compute-0 sudo[75761]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:17 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec 15 10:35:17 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec 15 10:35:17 compute-0 sshd-session[75786]: Accepted publickey for ceph-admin from 192.168.122.100 port 45964 ssh2: RSA SHA256:3azC1xJ0J6DfmukNQYX52hxlEZBJA1o57dtmni4O/KI
Dec 15 10:35:17 compute-0 systemd-logind[797]: New session 26 of user ceph-admin.
Dec 15 10:35:17 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Dec 15 10:35:17 compute-0 sshd-session[75786]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 15 10:35:17 compute-0 sudo[75790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:35:17 compute-0 sudo[75790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:17 compute-0 sudo[75790]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:17 compute-0 sshd-session[75815]: Accepted publickey for ceph-admin from 192.168.122.100 port 45978 ssh2: RSA SHA256:3azC1xJ0J6DfmukNQYX52hxlEZBJA1o57dtmni4O/KI
Dec 15 10:35:17 compute-0 systemd-logind[797]: New session 27 of user ceph-admin.
Dec 15 10:35:17 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Dec 15 10:35:17 compute-0 sshd-session[75815]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 15 10:35:17 compute-0 sudo[75819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:35:17 compute-0 sudo[75819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:17 compute-0 sudo[75819]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:18 compute-0 ceph-mon[74356]: Deploying cephadm binary to compute-0
Dec 15 10:35:18 compute-0 sshd-session[75844]: Accepted publickey for ceph-admin from 192.168.122.100 port 45994 ssh2: RSA SHA256:3azC1xJ0J6DfmukNQYX52hxlEZBJA1o57dtmni4O/KI
Dec 15 10:35:18 compute-0 systemd-logind[797]: New session 28 of user ceph-admin.
Dec 15 10:35:18 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Dec 15 10:35:18 compute-0 sshd-session[75844]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 15 10:35:18 compute-0 sudo[75848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Dec 15 10:35:18 compute-0 sudo[75848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:18 compute-0 sudo[75848]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:18 compute-0 sshd-session[75873]: Accepted publickey for ceph-admin from 192.168.122.100 port 46000 ssh2: RSA SHA256:3azC1xJ0J6DfmukNQYX52hxlEZBJA1o57dtmni4O/KI
Dec 15 10:35:18 compute-0 systemd-logind[797]: New session 29 of user ceph-admin.
Dec 15 10:35:18 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Dec 15 10:35:18 compute-0 sshd-session[75873]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 15 10:35:18 compute-0 sudo[75877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:35:18 compute-0 sudo[75877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:18 compute-0 sudo[75877]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:18 compute-0 ceph-mgr[74651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 15 10:35:18 compute-0 sshd-session[75902]: Accepted publickey for ceph-admin from 192.168.122.100 port 46008 ssh2: RSA SHA256:3azC1xJ0J6DfmukNQYX52hxlEZBJA1o57dtmni4O/KI
Dec 15 10:35:18 compute-0 systemd-logind[797]: New session 30 of user ceph-admin.
Dec 15 10:35:18 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Dec 15 10:35:18 compute-0 sshd-session[75902]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 15 10:35:18 compute-0 sudo[75906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Dec 15 10:35:18 compute-0 sudo[75906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:18 compute-0 sudo[75906]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:19 compute-0 sshd-session[75931]: Accepted publickey for ceph-admin from 192.168.122.100 port 46020 ssh2: RSA SHA256:3azC1xJ0J6DfmukNQYX52hxlEZBJA1o57dtmni4O/KI
Dec 15 10:35:19 compute-0 systemd-logind[797]: New session 31 of user ceph-admin.
Dec 15 10:35:19 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Dec 15 10:35:19 compute-0 sshd-session[75931]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 15 10:35:20 compute-0 sshd-session[75958]: Accepted publickey for ceph-admin from 192.168.122.100 port 41716 ssh2: RSA SHA256:3azC1xJ0J6DfmukNQYX52hxlEZBJA1o57dtmni4O/KI
Dec 15 10:35:20 compute-0 systemd-logind[797]: New session 32 of user ceph-admin.
Dec 15 10:35:20 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Dec 15 10:35:20 compute-0 sshd-session[75958]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 15 10:35:20 compute-0 sudo[75962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Dec 15 10:35:20 compute-0 sudo[75962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:20 compute-0 sudo[75962]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:20 compute-0 sshd-session[75987]: Accepted publickey for ceph-admin from 192.168.122.100 port 41722 ssh2: RSA SHA256:3azC1xJ0J6DfmukNQYX52hxlEZBJA1o57dtmni4O/KI
Dec 15 10:35:20 compute-0 systemd-logind[797]: New session 33 of user ceph-admin.
Dec 15 10:35:20 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Dec 15 10:35:20 compute-0 sshd-session[75987]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 15 10:35:20 compute-0 ceph-mgr[74651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 15 10:35:20 compute-0 sudo[75991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Dec 15 10:35:20 compute-0 sudo[75991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:21 compute-0 sudo[75991]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 15 10:35:21 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:21 compute-0 ceph-mgr[74651]: [cephadm INFO root] Added host compute-0
Dec 15 10:35:21 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 15 10:35:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 15 10:35:21 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 15 10:35:21 compute-0 dazzling_davinci[75652]: Added host 'compute-0' with addr '192.168.122.100'
Dec 15 10:35:21 compute-0 systemd[1]: libpod-14b6f588f189ae194ea94a825f1995f8ef3fe603c842553a5f5bc9a9340dde9f.scope: Deactivated successfully.
Dec 15 10:35:21 compute-0 podman[75636]: 2025-12-15 10:35:21.0963243 +0000 UTC m=+5.542883299 container died 14b6f588f189ae194ea94a825f1995f8ef3fe603c842553a5f5bc9a9340dde9f (image=quay.io/ceph/ceph:v19, name=dazzling_davinci, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:35:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-35a35ee92009ce200720d428ee239035f9475b7a2feb8d56d6e32b787040a3d1-merged.mount: Deactivated successfully.
Dec 15 10:35:21 compute-0 sudo[76036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:35:21 compute-0 sudo[76036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:21 compute-0 podman[75636]: 2025-12-15 10:35:21.143238614 +0000 UTC m=+5.589797613 container remove 14b6f588f189ae194ea94a825f1995f8ef3fe603c842553a5f5bc9a9340dde9f (image=quay.io/ceph/ceph:v19, name=dazzling_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:35:21 compute-0 sudo[76036]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:21 compute-0 systemd[1]: libpod-conmon-14b6f588f189ae194ea94a825f1995f8ef3fe603c842553a5f5bc9a9340dde9f.scope: Deactivated successfully.
Dec 15 10:35:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054712 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:35:21 compute-0 sudo[76073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 pull
Dec 15 10:35:21 compute-0 podman[76074]: 2025-12-15 10:35:21.21210068 +0000 UTC m=+0.044175471 container create 2be4363aab7c5ad93174a03a0a34250acd5cee9b6263f4c32293eea6e971126c (image=quay.io/ceph/ceph:v19, name=sharp_matsumoto, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 15 10:35:21 compute-0 sudo[76073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:21 compute-0 systemd[1]: Started libpod-conmon-2be4363aab7c5ad93174a03a0a34250acd5cee9b6263f4c32293eea6e971126c.scope.
Dec 15 10:35:21 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:21 compute-0 podman[76074]: 2025-12-15 10:35:21.192004877 +0000 UTC m=+0.024079698 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60c3fec7ef4a6bd4663644ee43d6ad632fe889cf56943f6e45b53614bc3e7a46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60c3fec7ef4a6bd4663644ee43d6ad632fe889cf56943f6e45b53614bc3e7a46/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60c3fec7ef4a6bd4663644ee43d6ad632fe889cf56943f6e45b53614bc3e7a46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:21 compute-0 podman[76074]: 2025-12-15 10:35:21.308533179 +0000 UTC m=+0.140608000 container init 2be4363aab7c5ad93174a03a0a34250acd5cee9b6263f4c32293eea6e971126c (image=quay.io/ceph/ceph:v19, name=sharp_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 15 10:35:21 compute-0 podman[76074]: 2025-12-15 10:35:21.316052032 +0000 UTC m=+0.148126823 container start 2be4363aab7c5ad93174a03a0a34250acd5cee9b6263f4c32293eea6e971126c (image=quay.io/ceph/ceph:v19, name=sharp_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:35:21 compute-0 podman[76074]: 2025-12-15 10:35:21.320048377 +0000 UTC m=+0.152123228 container attach 2be4363aab7c5ad93174a03a0a34250acd5cee9b6263f4c32293eea6e971126c (image=quay.io/ceph/ceph:v19, name=sharp_matsumoto, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:35:21 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:21 compute-0 ceph-mgr[74651]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec 15 10:35:21 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec 15 10:35:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 15 10:35:21 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:21 compute-0 sharp_matsumoto[76114]: Scheduled mon update...
Dec 15 10:35:21 compute-0 systemd[1]: libpod-2be4363aab7c5ad93174a03a0a34250acd5cee9b6263f4c32293eea6e971126c.scope: Deactivated successfully.
Dec 15 10:35:21 compute-0 podman[76074]: 2025-12-15 10:35:21.777380317 +0000 UTC m=+0.609455108 container died 2be4363aab7c5ad93174a03a0a34250acd5cee9b6263f4c32293eea6e971126c (image=quay.io/ceph/ceph:v19, name=sharp_matsumoto, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:35:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-60c3fec7ef4a6bd4663644ee43d6ad632fe889cf56943f6e45b53614bc3e7a46-merged.mount: Deactivated successfully.
Dec 15 10:35:22 compute-0 podman[76074]: 2025-12-15 10:35:22.018621635 +0000 UTC m=+0.850696426 container remove 2be4363aab7c5ad93174a03a0a34250acd5cee9b6263f4c32293eea6e971126c (image=quay.io/ceph/ceph:v19, name=sharp_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 15 10:35:22 compute-0 systemd[1]: libpod-conmon-2be4363aab7c5ad93174a03a0a34250acd5cee9b6263f4c32293eea6e971126c.scope: Deactivated successfully.
Dec 15 10:35:22 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:22 compute-0 ceph-mon[74356]: Added host compute-0
Dec 15 10:35:22 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 15 10:35:22 compute-0 ceph-mon[74356]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:22 compute-0 ceph-mon[74356]: Saving service mon spec with placement count:5
Dec 15 10:35:22 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:22 compute-0 podman[76176]: 2025-12-15 10:35:22.083995613 +0000 UTC m=+0.047900097 container create 7034f7ad7447396ced9e3efe51377832eadfc6141edaad9c92a8d4d17df239d9 (image=quay.io/ceph/ceph:v19, name=loving_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True)
Dec 15 10:35:22 compute-0 systemd[1]: Started libpod-conmon-7034f7ad7447396ced9e3efe51377832eadfc6141edaad9c92a8d4d17df239d9.scope.
Dec 15 10:35:22 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e91998fdcc433b525f3453f1a0588b6e23c2ec0829ce05afe66943d8f3e0eb1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e91998fdcc433b525f3453f1a0588b6e23c2ec0829ce05afe66943d8f3e0eb1d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e91998fdcc433b525f3453f1a0588b6e23c2ec0829ce05afe66943d8f3e0eb1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:22 compute-0 podman[76176]: 2025-12-15 10:35:22.05874272 +0000 UTC m=+0.022647204 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:22 compute-0 podman[76176]: 2025-12-15 10:35:22.182683963 +0000 UTC m=+0.146588467 container init 7034f7ad7447396ced9e3efe51377832eadfc6141edaad9c92a8d4d17df239d9 (image=quay.io/ceph/ceph:v19, name=loving_mcnulty, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 15 10:35:22 compute-0 podman[76176]: 2025-12-15 10:35:22.188456312 +0000 UTC m=+0.152360796 container start 7034f7ad7447396ced9e3efe51377832eadfc6141edaad9c92a8d4d17df239d9 (image=quay.io/ceph/ceph:v19, name=loving_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:35:22 compute-0 podman[76176]: 2025-12-15 10:35:22.246833322 +0000 UTC m=+0.210737816 container attach 7034f7ad7447396ced9e3efe51377832eadfc6141edaad9c92a8d4d17df239d9 (image=quay.io/ceph/ceph:v19, name=loving_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:35:22 compute-0 podman[76149]: 2025-12-15 10:35:22.279472903 +0000 UTC m=+0.813512743 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:22 compute-0 podman[76227]: 2025-12-15 10:35:22.3825921 +0000 UTC m=+0.040887278 container create ba388b123bc4bbdde84b9bfc32b73db7560481b5c4017869b33a18a09cba0688 (image=quay.io/ceph/ceph:v19, name=quirky_dijkstra, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:35:22 compute-0 systemd[1]: Started libpod-conmon-ba388b123bc4bbdde84b9bfc32b73db7560481b5c4017869b33a18a09cba0688.scope.
Dec 15 10:35:22 compute-0 podman[76227]: 2025-12-15 10:35:22.363971683 +0000 UTC m=+0.022266881 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:22 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:22 compute-0 podman[76227]: 2025-12-15 10:35:22.524892633 +0000 UTC m=+0.183187821 container init ba388b123bc4bbdde84b9bfc32b73db7560481b5c4017869b33a18a09cba0688 (image=quay.io/ceph/ceph:v19, name=quirky_dijkstra, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 15 10:35:22 compute-0 podman[76227]: 2025-12-15 10:35:22.531553369 +0000 UTC m=+0.189848547 container start ba388b123bc4bbdde84b9bfc32b73db7560481b5c4017869b33a18a09cba0688 (image=quay.io/ceph/ceph:v19, name=quirky_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:35:22 compute-0 podman[76227]: 2025-12-15 10:35:22.570822257 +0000 UTC m=+0.229117455 container attach ba388b123bc4bbdde84b9bfc32b73db7560481b5c4017869b33a18a09cba0688 (image=quay.io/ceph/ceph:v19, name=quirky_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 15 10:35:22 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:22 compute-0 ceph-mgr[74651]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec 15 10:35:22 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec 15 10:35:22 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 15 10:35:22 compute-0 quirky_dijkstra[76243]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Dec 15 10:35:22 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:22 compute-0 ceph-mgr[74651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 15 10:35:22 compute-0 loving_mcnulty[76192]: Scheduled mgr update...
Dec 15 10:35:22 compute-0 systemd[1]: libpod-ba388b123bc4bbdde84b9bfc32b73db7560481b5c4017869b33a18a09cba0688.scope: Deactivated successfully.
Dec 15 10:35:22 compute-0 podman[76227]: 2025-12-15 10:35:22.632565611 +0000 UTC m=+0.290860789 container died ba388b123bc4bbdde84b9bfc32b73db7560481b5c4017869b33a18a09cba0688 (image=quay.io/ceph/ceph:v19, name=quirky_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 15 10:35:22 compute-0 systemd[1]: libpod-7034f7ad7447396ced9e3efe51377832eadfc6141edaad9c92a8d4d17df239d9.scope: Deactivated successfully.
Dec 15 10:35:22 compute-0 podman[76176]: 2025-12-15 10:35:22.650748435 +0000 UTC m=+0.614652929 container died 7034f7ad7447396ced9e3efe51377832eadfc6141edaad9c92a8d4d17df239d9 (image=quay.io/ceph/ceph:v19, name=loving_mcnulty, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:35:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e320baacb1f9ac36b83cb1e9c96a8a63465e09ab6f82828876d4c39bc66c27a-merged.mount: Deactivated successfully.
Dec 15 10:35:23 compute-0 podman[76227]: 2025-12-15 10:35:23.008359033 +0000 UTC m=+0.666654211 container remove ba388b123bc4bbdde84b9bfc32b73db7560481b5c4017869b33a18a09cba0688 (image=quay.io/ceph/ceph:v19, name=quirky_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 15 10:35:23 compute-0 sudo[76073]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:23 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Dec 15 10:35:23 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-e91998fdcc433b525f3453f1a0588b6e23c2ec0829ce05afe66943d8f3e0eb1d-merged.mount: Deactivated successfully.
Dec 15 10:35:23 compute-0 sudo[76275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:35:23 compute-0 sudo[76275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:23 compute-0 sudo[76275]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:23 compute-0 podman[76176]: 2025-12-15 10:35:23.371301736 +0000 UTC m=+1.335206220 container remove 7034f7ad7447396ced9e3efe51377832eadfc6141edaad9c92a8d4d17df239d9 (image=quay.io/ceph/ceph:v19, name=loving_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:35:23 compute-0 systemd[1]: libpod-conmon-7034f7ad7447396ced9e3efe51377832eadfc6141edaad9c92a8d4d17df239d9.scope: Deactivated successfully.
Dec 15 10:35:23 compute-0 sudo[76300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Dec 15 10:35:23 compute-0 sudo[76300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:23 compute-0 podman[76323]: 2025-12-15 10:35:23.467160158 +0000 UTC m=+0.073367066 container create b59e99274a92c8d699e89f30b684bcb4711c8e7cb8fb0f5c4b46ada4205bc7c1 (image=quay.io/ceph/ceph:v19, name=serene_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 15 10:35:23 compute-0 podman[76323]: 2025-12-15 10:35:23.417468348 +0000 UTC m=+0.023675286 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:23 compute-0 systemd[1]: Started libpod-conmon-b59e99274a92c8d699e89f30b684bcb4711c8e7cb8fb0f5c4b46ada4205bc7c1.scope.
Dec 15 10:35:23 compute-0 systemd[1]: libpod-conmon-ba388b123bc4bbdde84b9bfc32b73db7560481b5c4017869b33a18a09cba0688.scope: Deactivated successfully.
Dec 15 10:35:23 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b6bd38d75234be6b5875e26648b649fb29b43a870ed03c28250aa4fbe40a7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b6bd38d75234be6b5875e26648b649fb29b43a870ed03c28250aa4fbe40a7d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b6bd38d75234be6b5875e26648b649fb29b43a870ed03c28250aa4fbe40a7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:23 compute-0 podman[76323]: 2025-12-15 10:35:23.631670829 +0000 UTC m=+0.237877767 container init b59e99274a92c8d699e89f30b684bcb4711c8e7cb8fb0f5c4b46ada4205bc7c1 (image=quay.io/ceph/ceph:v19, name=serene_wozniak, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:35:23 compute-0 ceph-mon[74356]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:23 compute-0 ceph-mon[74356]: Saving service mgr spec with placement count:2
Dec 15 10:35:23 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:23 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:23 compute-0 podman[76323]: 2025-12-15 10:35:23.638502951 +0000 UTC m=+0.244709859 container start b59e99274a92c8d699e89f30b684bcb4711c8e7cb8fb0f5c4b46ada4205bc7c1 (image=quay.io/ceph/ceph:v19, name=serene_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 15 10:35:23 compute-0 podman[76323]: 2025-12-15 10:35:23.646597262 +0000 UTC m=+0.252804190 container attach b59e99274a92c8d699e89f30b684bcb4711c8e7cb8fb0f5c4b46ada4205bc7c1 (image=quay.io/ceph/ceph:v19, name=serene_wozniak, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 15 10:35:23 compute-0 sudo[76300]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:23 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:35:23 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:23 compute-0 sudo[76384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:35:23 compute-0 sudo[76384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:23 compute-0 sudo[76384]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:23 compute-0 sudo[76409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 15 10:35:23 compute-0 sudo[76409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:24 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:24 compute-0 ceph-mgr[74651]: [cephadm INFO root] Saving service crash spec with placement *
Dec 15 10:35:24 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec 15 10:35:24 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 15 10:35:24 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:24 compute-0 serene_wozniak[76341]: Scheduled crash update...
Dec 15 10:35:24 compute-0 systemd[1]: libpod-b59e99274a92c8d699e89f30b684bcb4711c8e7cb8fb0f5c4b46ada4205bc7c1.scope: Deactivated successfully.
Dec 15 10:35:24 compute-0 podman[76323]: 2025-12-15 10:35:24.32029699 +0000 UTC m=+0.926503918 container died b59e99274a92c8d699e89f30b684bcb4711c8e7cb8fb0f5c4b46ada4205bc7c1 (image=quay.io/ceph/ceph:v19, name=serene_wozniak, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 15 10:35:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6b6bd38d75234be6b5875e26648b649fb29b43a870ed03c28250aa4fbe40a7d-merged.mount: Deactivated successfully.
Dec 15 10:35:24 compute-0 podman[76323]: 2025-12-15 10:35:24.552139359 +0000 UTC m=+1.158346267 container remove b59e99274a92c8d699e89f30b684bcb4711c8e7cb8fb0f5c4b46ada4205bc7c1 (image=quay.io/ceph/ceph:v19, name=serene_wozniak, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:35:24 compute-0 systemd[1]: libpod-conmon-b59e99274a92c8d699e89f30b684bcb4711c8e7cb8fb0f5c4b46ada4205bc7c1.scope: Deactivated successfully.
Dec 15 10:35:24 compute-0 ceph-mgr[74651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 15 10:35:24 compute-0 podman[76516]: 2025-12-15 10:35:24.638079703 +0000 UTC m=+0.059131825 container create 7d33e0f739b41fde90fc1d2b2225e5c89d8067567c345cd3f20d3a3ba7feab58 (image=quay.io/ceph/ceph:v19, name=magical_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 15 10:35:24 compute-0 podman[76518]: 2025-12-15 10:35:24.656289477 +0000 UTC m=+0.071083454 container exec 79ed8dc51b1852fd831764c2817cfa3745ae4937bc9015bdb965e376ab3cc58d (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 15 10:35:24 compute-0 systemd[1]: Started libpod-conmon-7d33e0f739b41fde90fc1d2b2225e5c89d8067567c345cd3f20d3a3ba7feab58.scope.
Dec 15 10:35:24 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:24 compute-0 podman[76516]: 2025-12-15 10:35:24.609393603 +0000 UTC m=+0.030445745 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca93d8476df998be99e7ba4975dd1752fd96b2981fa6764b192784e6c492d3c9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca93d8476df998be99e7ba4975dd1752fd96b2981fa6764b192784e6c492d3c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca93d8476df998be99e7ba4975dd1752fd96b2981fa6764b192784e6c492d3c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:24 compute-0 podman[76516]: 2025-12-15 10:35:24.722503751 +0000 UTC m=+0.143555863 container init 7d33e0f739b41fde90fc1d2b2225e5c89d8067567c345cd3f20d3a3ba7feab58 (image=quay.io/ceph/ceph:v19, name=magical_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid)
Dec 15 10:35:24 compute-0 podman[76516]: 2025-12-15 10:35:24.729035563 +0000 UTC m=+0.150087675 container start 7d33e0f739b41fde90fc1d2b2225e5c89d8067567c345cd3f20d3a3ba7feab58 (image=quay.io/ceph/ceph:v19, name=magical_mendel, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 15 10:35:24 compute-0 podman[76516]: 2025-12-15 10:35:24.733632706 +0000 UTC m=+0.154684838 container attach 7d33e0f739b41fde90fc1d2b2225e5c89d8067567c345cd3f20d3a3ba7feab58 (image=quay.io/ceph/ceph:v19, name=magical_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:35:24 compute-0 podman[76518]: 2025-12-15 10:35:24.771805919 +0000 UTC m=+0.186599896 container exec_died 79ed8dc51b1852fd831764c2817cfa3745ae4937bc9015bdb965e376ab3cc58d (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 15 10:35:24 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:24 compute-0 ceph-mon[74356]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:24 compute-0 ceph-mon[74356]: Saving service crash spec with placement *
Dec 15 10:35:24 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:24 compute-0 sudo[76409]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:24 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:35:24 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:24 compute-0 sudo[76603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:35:24 compute-0 sudo[76603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:24 compute-0 sudo[76603]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:25 compute-0 sudo[76628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 15 10:35:25 compute-0 sudo[76628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:25 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Dec 15 10:35:25 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/738589213' entity='client.admin' 
Dec 15 10:35:25 compute-0 systemd[1]: libpod-7d33e0f739b41fde90fc1d2b2225e5c89d8067567c345cd3f20d3a3ba7feab58.scope: Deactivated successfully.
Dec 15 10:35:25 compute-0 podman[76516]: 2025-12-15 10:35:25.136584809 +0000 UTC m=+0.557636921 container died 7d33e0f739b41fde90fc1d2b2225e5c89d8067567c345cd3f20d3a3ba7feab58 (image=quay.io/ceph/ceph:v19, name=magical_mendel, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 15 10:35:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca93d8476df998be99e7ba4975dd1752fd96b2981fa6764b192784e6c492d3c9-merged.mount: Deactivated successfully.
Dec 15 10:35:25 compute-0 podman[76516]: 2025-12-15 10:35:25.184418392 +0000 UTC m=+0.605470504 container remove 7d33e0f739b41fde90fc1d2b2225e5c89d8067567c345cd3f20d3a3ba7feab58 (image=quay.io/ceph/ceph:v19, name=magical_mendel, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 15 10:35:25 compute-0 systemd[1]: libpod-conmon-7d33e0f739b41fde90fc1d2b2225e5c89d8067567c345cd3f20d3a3ba7feab58.scope: Deactivated successfully.
Dec 15 10:35:25 compute-0 podman[76666]: 2025-12-15 10:35:25.245290579 +0000 UTC m=+0.039035761 container create 57412db236bf1d647d038cadde2c409494c706fbfc80859da82d9412df8e5b63 (image=quay.io/ceph/ceph:v19, name=frosty_hermann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 15 10:35:25 compute-0 systemd[1]: Started libpod-conmon-57412db236bf1d647d038cadde2c409494c706fbfc80859da82d9412df8e5b63.scope.
Dec 15 10:35:25 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:25 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76698 (sysctl)
Dec 15 10:35:25 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 15 10:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcde2201655815494d9eb8d2c89981b3c7dcc0a5221b8e314d4aa1fb88cf89a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcde2201655815494d9eb8d2c89981b3c7dcc0a5221b8e314d4aa1fb88cf89a6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcde2201655815494d9eb8d2c89981b3c7dcc0a5221b8e314d4aa1fb88cf89a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:25 compute-0 podman[76666]: 2025-12-15 10:35:25.312050589 +0000 UTC m=+0.105795771 container init 57412db236bf1d647d038cadde2c409494c706fbfc80859da82d9412df8e5b63 (image=quay.io/ceph/ceph:v19, name=frosty_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:35:25 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 15 10:35:25 compute-0 podman[76666]: 2025-12-15 10:35:25.3188228 +0000 UTC m=+0.112567982 container start 57412db236bf1d647d038cadde2c409494c706fbfc80859da82d9412df8e5b63 (image=quay.io/ceph/ceph:v19, name=frosty_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:35:25 compute-0 podman[76666]: 2025-12-15 10:35:25.322004978 +0000 UTC m=+0.115750190 container attach 57412db236bf1d647d038cadde2c409494c706fbfc80859da82d9412df8e5b63 (image=quay.io/ceph/ceph:v19, name=frosty_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 15 10:35:25 compute-0 podman[76666]: 2025-12-15 10:35:25.228129457 +0000 UTC m=+0.021874669 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:25 compute-0 sudo[76628]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:25 compute-0 sudo[76741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:35:25 compute-0 sudo[76741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:25 compute-0 sudo[76741]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:25 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:25 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Dec 15 10:35:25 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:25 compute-0 systemd[1]: libpod-57412db236bf1d647d038cadde2c409494c706fbfc80859da82d9412df8e5b63.scope: Deactivated successfully.
Dec 15 10:35:25 compute-0 podman[76666]: 2025-12-15 10:35:25.72038747 +0000 UTC m=+0.514132652 container died 57412db236bf1d647d038cadde2c409494c706fbfc80859da82d9412df8e5b63 (image=quay.io/ceph/ceph:v19, name=frosty_hermann, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:35:25 compute-0 sudo[76766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 15 10:35:25 compute-0 sudo[76766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcde2201655815494d9eb8d2c89981b3c7dcc0a5221b8e314d4aa1fb88cf89a6-merged.mount: Deactivated successfully.
Dec 15 10:35:25 compute-0 podman[76666]: 2025-12-15 10:35:25.765030684 +0000 UTC m=+0.558775866 container remove 57412db236bf1d647d038cadde2c409494c706fbfc80859da82d9412df8e5b63 (image=quay.io/ceph/ceph:v19, name=frosty_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 15 10:35:25 compute-0 systemd[1]: libpod-conmon-57412db236bf1d647d038cadde2c409494c706fbfc80859da82d9412df8e5b63.scope: Deactivated successfully.
Dec 15 10:35:25 compute-0 podman[76805]: 2025-12-15 10:35:25.834422925 +0000 UTC m=+0.046924986 container create 2e2cff507ff30102756d75b6c224635e2f06f7ca4e2b0fd7123ee42daefc22b2 (image=quay.io/ceph/ceph:v19, name=sad_tu, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:35:25 compute-0 systemd[1]: Started libpod-conmon-2e2cff507ff30102756d75b6c224635e2f06f7ca4e2b0fd7123ee42daefc22b2.scope.
Dec 15 10:35:25 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc3f811cb8e7899c6f0b02bf180e5eb12b1fb047f3e4179f92e9637ac4d66149/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc3f811cb8e7899c6f0b02bf180e5eb12b1fb047f3e4179f92e9637ac4d66149/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc3f811cb8e7899c6f0b02bf180e5eb12b1fb047f3e4179f92e9637ac4d66149/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:25 compute-0 podman[76805]: 2025-12-15 10:35:25.815408776 +0000 UTC m=+0.027910847 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:25 compute-0 podman[76805]: 2025-12-15 10:35:25.921841566 +0000 UTC m=+0.134343627 container init 2e2cff507ff30102756d75b6c224635e2f06f7ca4e2b0fd7123ee42daefc22b2 (image=quay.io/ceph/ceph:v19, name=sad_tu, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 15 10:35:25 compute-0 podman[76805]: 2025-12-15 10:35:25.926987785 +0000 UTC m=+0.139489846 container start 2e2cff507ff30102756d75b6c224635e2f06f7ca4e2b0fd7123ee42daefc22b2 (image=quay.io/ceph/ceph:v19, name=sad_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:35:25 compute-0 podman[76805]: 2025-12-15 10:35:25.930614388 +0000 UTC m=+0.143116449 container attach 2e2cff507ff30102756d75b6c224635e2f06f7ca4e2b0fd7123ee42daefc22b2 (image=quay.io/ceph/ceph:v19, name=sad_tu, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 15 10:35:25 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:25 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/738589213' entity='client.admin' 
Dec 15 10:35:25 compute-0 ceph-mon[74356]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:25 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:26 compute-0 sudo[76766]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:26 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:35:26 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:26 compute-0 sudo[76851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:35:26 compute-0 sudo[76851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:26 compute-0 sudo[76851]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:26 compute-0 sudo[76888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 -- inventory --format=json-pretty --filter-for-batch
Dec 15 10:35:26 compute-0 sudo[76888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:26 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:35:26 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:26 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 15 10:35:26 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:26 compute-0 ceph-mgr[74651]: [cephadm INFO root] Added label _admin to host compute-0
Dec 15 10:35:26 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec 15 10:35:26 compute-0 sad_tu[76821]: Added label _admin to host compute-0
Dec 15 10:35:26 compute-0 systemd[1]: libpod-2e2cff507ff30102756d75b6c224635e2f06f7ca4e2b0fd7123ee42daefc22b2.scope: Deactivated successfully.
Dec 15 10:35:26 compute-0 podman[76805]: 2025-12-15 10:35:26.356959007 +0000 UTC m=+0.569461068 container died 2e2cff507ff30102756d75b6c224635e2f06f7ca4e2b0fd7123ee42daefc22b2 (image=quay.io/ceph/ceph:v19, name=sad_tu, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 15 10:35:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc3f811cb8e7899c6f0b02bf180e5eb12b1fb047f3e4179f92e9637ac4d66149-merged.mount: Deactivated successfully.
Dec 15 10:35:26 compute-0 podman[76805]: 2025-12-15 10:35:26.396467732 +0000 UTC m=+0.608969793 container remove 2e2cff507ff30102756d75b6c224635e2f06f7ca4e2b0fd7123ee42daefc22b2 (image=quay.io/ceph/ceph:v19, name=sad_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 15 10:35:26 compute-0 systemd[1]: libpod-conmon-2e2cff507ff30102756d75b6c224635e2f06f7ca4e2b0fd7123ee42daefc22b2.scope: Deactivated successfully.
Dec 15 10:35:26 compute-0 podman[76948]: 2025-12-15 10:35:26.468099313 +0000 UTC m=+0.047154143 container create 17d6288a3473805e988a2361de2ee327955a2527f884b2de31f27a7c17c7da42 (image=quay.io/ceph/ceph:v19, name=vigilant_einstein, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 15 10:35:26 compute-0 systemd[1]: Started libpod-conmon-17d6288a3473805e988a2361de2ee327955a2527f884b2de31f27a7c17c7da42.scope.
Dec 15 10:35:26 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa1835d58596b4e6d72a1316dc9a3fa1cf12bc0516b05cc83c22eddeaa3e015/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa1835d58596b4e6d72a1316dc9a3fa1cf12bc0516b05cc83c22eddeaa3e015/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa1835d58596b4e6d72a1316dc9a3fa1cf12bc0516b05cc83c22eddeaa3e015/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:26 compute-0 podman[76948]: 2025-12-15 10:35:26.446768262 +0000 UTC m=+0.025823112 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:26 compute-0 podman[76948]: 2025-12-15 10:35:26.559327302 +0000 UTC m=+0.138382162 container init 17d6288a3473805e988a2361de2ee327955a2527f884b2de31f27a7c17c7da42 (image=quay.io/ceph/ceph:v19, name=vigilant_einstein, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:35:26 compute-0 podman[76948]: 2025-12-15 10:35:26.565155452 +0000 UTC m=+0.144210282 container start 17d6288a3473805e988a2361de2ee327955a2527f884b2de31f27a7c17c7da42 (image=quay.io/ceph/ceph:v19, name=vigilant_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 15 10:35:26 compute-0 podman[76948]: 2025-12-15 10:35:26.569712903 +0000 UTC m=+0.148767813 container attach 17d6288a3473805e988a2361de2ee327955a2527f884b2de31f27a7c17c7da42 (image=quay.io/ceph/ceph:v19, name=vigilant_einstein, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 15 10:35:26 compute-0 podman[76987]: 2025-12-15 10:35:26.629529907 +0000 UTC m=+0.038046380 container create 03cd6050bfb46648048cb00d5c2f76bed92ca295856e74cdbf32f5c8d1a914d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hopper, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 15 10:35:26 compute-0 ceph-mgr[74651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 15 10:35:26 compute-0 systemd[1]: Started libpod-conmon-03cd6050bfb46648048cb00d5c2f76bed92ca295856e74cdbf32f5c8d1a914d8.scope.
Dec 15 10:35:26 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:26 compute-0 podman[76987]: 2025-12-15 10:35:26.694461301 +0000 UTC m=+0.102977804 container init 03cd6050bfb46648048cb00d5c2f76bed92ca295856e74cdbf32f5c8d1a914d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 15 10:35:26 compute-0 podman[76987]: 2025-12-15 10:35:26.70055657 +0000 UTC m=+0.109073043 container start 03cd6050bfb46648048cb00d5c2f76bed92ca295856e74cdbf32f5c8d1a914d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hopper, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 15 10:35:26 compute-0 crazy_hopper[77005]: 167 167
Dec 15 10:35:26 compute-0 systemd[1]: libpod-03cd6050bfb46648048cb00d5c2f76bed92ca295856e74cdbf32f5c8d1a914d8.scope: Deactivated successfully.
Dec 15 10:35:26 compute-0 podman[76987]: 2025-12-15 10:35:26.704967017 +0000 UTC m=+0.113483530 container attach 03cd6050bfb46648048cb00d5c2f76bed92ca295856e74cdbf32f5c8d1a914d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hopper, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:35:26 compute-0 podman[76987]: 2025-12-15 10:35:26.7053917 +0000 UTC m=+0.113908173 container died 03cd6050bfb46648048cb00d5c2f76bed92ca295856e74cdbf32f5c8d1a914d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:35:26 compute-0 podman[76987]: 2025-12-15 10:35:26.613542862 +0000 UTC m=+0.022059355 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:35:26 compute-0 podman[76987]: 2025-12-15 10:35:26.737508736 +0000 UTC m=+0.146025219 container remove 03cd6050bfb46648048cb00d5c2f76bed92ca295856e74cdbf32f5c8d1a914d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 15 10:35:26 compute-0 systemd[1]: libpod-conmon-03cd6050bfb46648048cb00d5c2f76bed92ca295856e74cdbf32f5c8d1a914d8.scope: Deactivated successfully.
Dec 15 10:35:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-2af8c78ada41c4a152907ffb5a8c45c722ced5eab291926493592fe3f2a80011-merged.mount: Deactivated successfully.
Dec 15 10:35:27 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:27 compute-0 ceph-mon[74356]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:27 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:27 compute-0 ceph-mon[74356]: Added label _admin to host compute-0
Dec 15 10:35:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Dec 15 10:35:27 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4169649022' entity='client.admin' 
Dec 15 10:35:27 compute-0 vigilant_einstein[76982]: set mgr/dashboard/cluster/status
Dec 15 10:35:27 compute-0 systemd[1]: libpod-17d6288a3473805e988a2361de2ee327955a2527f884b2de31f27a7c17c7da42.scope: Deactivated successfully.
Dec 15 10:35:27 compute-0 podman[76948]: 2025-12-15 10:35:27.087309732 +0000 UTC m=+0.666364572 container died 17d6288a3473805e988a2361de2ee327955a2527f884b2de31f27a7c17c7da42 (image=quay.io/ceph/ceph:v19, name=vigilant_einstein, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 15 10:35:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fa1835d58596b4e6d72a1316dc9a3fa1cf12bc0516b05cc83c22eddeaa3e015-merged.mount: Deactivated successfully.
Dec 15 10:35:27 compute-0 podman[76948]: 2025-12-15 10:35:27.129759138 +0000 UTC m=+0.708813968 container remove 17d6288a3473805e988a2361de2ee327955a2527f884b2de31f27a7c17c7da42 (image=quay.io/ceph/ceph:v19, name=vigilant_einstein, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:35:27 compute-0 systemd[1]: libpod-conmon-17d6288a3473805e988a2361de2ee327955a2527f884b2de31f27a7c17c7da42.scope: Deactivated successfully.
Dec 15 10:35:27 compute-0 sudo[73311]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:27 compute-0 podman[77059]: 2025-12-15 10:35:27.295386123 +0000 UTC m=+0.040302141 container create f8383b3c7d4c5a0a7063962f192ac73affa1fa5141c599a2b877c21f03555f89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:35:27 compute-0 systemd[1]: Started libpod-conmon-f8383b3c7d4c5a0a7063962f192ac73affa1fa5141c599a2b877c21f03555f89.scope.
Dec 15 10:35:27 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d94eb3e595ebcdb5abf1194503d41bd643cbe1d76aedda5ab81e75ae32afe261/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d94eb3e595ebcdb5abf1194503d41bd643cbe1d76aedda5ab81e75ae32afe261/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d94eb3e595ebcdb5abf1194503d41bd643cbe1d76aedda5ab81e75ae32afe261/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d94eb3e595ebcdb5abf1194503d41bd643cbe1d76aedda5ab81e75ae32afe261/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:27 compute-0 podman[77059]: 2025-12-15 10:35:27.278490419 +0000 UTC m=+0.023406457 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:35:27 compute-0 podman[77059]: 2025-12-15 10:35:27.382024129 +0000 UTC m=+0.126940197 container init f8383b3c7d4c5a0a7063962f192ac73affa1fa5141c599a2b877c21f03555f89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:35:27 compute-0 podman[77059]: 2025-12-15 10:35:27.389584274 +0000 UTC m=+0.134500302 container start f8383b3c7d4c5a0a7063962f192ac73affa1fa5141c599a2b877c21f03555f89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True)
Dec 15 10:35:27 compute-0 podman[77059]: 2025-12-15 10:35:27.392761632 +0000 UTC m=+0.137677650 container attach f8383b3c7d4c5a0a7063962f192ac73affa1fa5141c599a2b877c21f03555f89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_aryabhata, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:35:27 compute-0 sudo[77103]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzkfnaynckoebvfrgzuwynootjlmkmrh ; /usr/bin/python3'
Dec 15 10:35:27 compute-0 sudo[77103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:35:27 compute-0 python3[77105]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:35:27 compute-0 podman[77112]: 2025-12-15 10:35:27.764306682 +0000 UTC m=+0.046009157 container create 08fe156c0438e47238c926544d87085673e2542d04616833df0db3a5a21b6a32 (image=quay.io/ceph/ceph:v19, name=nice_feistel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:35:27 compute-0 systemd[1]: Started libpod-conmon-08fe156c0438e47238c926544d87085673e2542d04616833df0db3a5a21b6a32.scope.
Dec 15 10:35:27 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/963c106ed032d5f748879809c2fb8a8c0335c366d97224ab8e1c7ee6a5f47a8d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/963c106ed032d5f748879809c2fb8a8c0335c366d97224ab8e1c7ee6a5f47a8d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:27 compute-0 podman[77112]: 2025-12-15 10:35:27.741863996 +0000 UTC m=+0.023566511 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:27 compute-0 podman[77112]: 2025-12-15 10:35:27.838431271 +0000 UTC m=+0.120133766 container init 08fe156c0438e47238c926544d87085673e2542d04616833df0db3a5a21b6a32 (image=quay.io/ceph/ceph:v19, name=nice_feistel, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:35:27 compute-0 podman[77112]: 2025-12-15 10:35:27.845258852 +0000 UTC m=+0.126961327 container start 08fe156c0438e47238c926544d87085673e2542d04616833df0db3a5a21b6a32 (image=quay.io/ceph/ceph:v19, name=nice_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:35:27 compute-0 podman[77112]: 2025-12-15 10:35:27.853786966 +0000 UTC m=+0.135489461 container attach 08fe156c0438e47238c926544d87085673e2542d04616833df0db3a5a21b6a32 (image=quay.io/ceph/ceph:v19, name=nice_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 15 10:35:28 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/4169649022' entity='client.admin' 
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]: [
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:     {
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:         "available": false,
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:         "being_replaced": false,
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:         "ceph_device_lvm": false,
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:         "lsm_data": {},
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:         "lvs": [],
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:         "path": "/dev/sr0",
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:         "rejected_reasons": [
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "Has a FileSystem",
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "Insufficient space (<5GB)"
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:         ],
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:         "sys_api": {
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "actuators": null,
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "device_nodes": [
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:                 "sr0"
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             ],
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "devname": "sr0",
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "human_readable_size": "482.00 KB",
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "id_bus": "ata",
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "model": "QEMU DVD-ROM",
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "nr_requests": "2",
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "parent": "/dev/sr0",
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "partitions": {},
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "path": "/dev/sr0",
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "removable": "1",
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "rev": "2.5+",
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "ro": "0",
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "rotational": "1",
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "sas_address": "",
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "sas_device_handle": "",
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "scheduler_mode": "mq-deadline",
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "sectors": 0,
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "sectorsize": "2048",
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "size": 493568.0,
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "support_discard": "2048",
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "type": "disk",
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:             "vendor": "QEMU"
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:         }
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]:     }
Dec 15 10:35:28 compute-0 adoring_aryabhata[77075]: ]
Dec 15 10:35:28 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Dec 15 10:35:28 compute-0 systemd[1]: libpod-f8383b3c7d4c5a0a7063962f192ac73affa1fa5141c599a2b877c21f03555f89.scope: Deactivated successfully.
Dec 15 10:35:28 compute-0 podman[77059]: 2025-12-15 10:35:28.205828631 +0000 UTC m=+0.950744649 container died f8383b3c7d4c5a0a7063962f192ac73affa1fa5141c599a2b877c21f03555f89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 15 10:35:28 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2709180897' entity='client.admin' 
Dec 15 10:35:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d94eb3e595ebcdb5abf1194503d41bd643cbe1d76aedda5ab81e75ae32afe261-merged.mount: Deactivated successfully.
Dec 15 10:35:28 compute-0 systemd[1]: libpod-08fe156c0438e47238c926544d87085673e2542d04616833df0db3a5a21b6a32.scope: Deactivated successfully.
Dec 15 10:35:28 compute-0 podman[77112]: 2025-12-15 10:35:28.25932351 +0000 UTC m=+0.541025985 container died 08fe156c0438e47238c926544d87085673e2542d04616833df0db3a5a21b6a32 (image=quay.io/ceph/ceph:v19, name=nice_feistel, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 15 10:35:28 compute-0 podman[77059]: 2025-12-15 10:35:28.28479701 +0000 UTC m=+1.029713028 container remove f8383b3c7d4c5a0a7063962f192ac73affa1fa5141c599a2b877c21f03555f89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_aryabhata, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 15 10:35:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-963c106ed032d5f748879809c2fb8a8c0335c366d97224ab8e1c7ee6a5f47a8d-merged.mount: Deactivated successfully.
Dec 15 10:35:28 compute-0 systemd[1]: libpod-conmon-f8383b3c7d4c5a0a7063962f192ac73affa1fa5141c599a2b877c21f03555f89.scope: Deactivated successfully.
Dec 15 10:35:28 compute-0 podman[77112]: 2025-12-15 10:35:28.315052387 +0000 UTC m=+0.596754862 container remove 08fe156c0438e47238c926544d87085673e2542d04616833df0db3a5a21b6a32 (image=quay.io/ceph/ceph:v19, name=nice_feistel, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 15 10:35:28 compute-0 systemd[1]: libpod-conmon-08fe156c0438e47238c926544d87085673e2542d04616833df0db3a5a21b6a32.scope: Deactivated successfully.
Dec 15 10:35:28 compute-0 sudo[76888]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:28 compute-0 sudo[77103]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:28 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:35:28 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:28 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:35:28 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:28 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:35:28 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:28 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:35:28 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:28 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 15 10:35:28 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 15 10:35:28 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:35:28 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:35:28 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 15 10:35:28 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:35:28 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 15 10:35:28 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 15 10:35:28 compute-0 sudo[78311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 15 10:35:28 compute-0 sudo[78311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:28 compute-0 sudo[78311]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:28 compute-0 sudo[78336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph
Dec 15 10:35:28 compute-0 sudo[78336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:28 compute-0 sudo[78336]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:28 compute-0 sudo[78361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.conf.new
Dec 15 10:35:28 compute-0 sudo[78361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:28 compute-0 sudo[78361]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:28 compute-0 sudo[78386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:35:28 compute-0 sudo[78386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:28 compute-0 sudo[78386]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:28 compute-0 ceph-mgr[74651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 15 10:35:28 compute-0 sudo[78411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.conf.new
Dec 15 10:35:28 compute-0 sudo[78411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:28 compute-0 sudo[78411]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:28 compute-0 sudo[78494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.conf.new
Dec 15 10:35:28 compute-0 sudo[78494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:28 compute-0 sudo[78494]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:28 compute-0 sudo[78541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.conf.new
Dec 15 10:35:28 compute-0 sudo[78541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:28 compute-0 sudo[78541]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:28 compute-0 sudo[78584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 15 10:35:28 compute-0 sudo[78584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:28 compute-0 sudo[78584]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:28 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:35:28 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:35:28 compute-0 sudo[78609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config
Dec 15 10:35:28 compute-0 sudo[78609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:28 compute-0 sudo[78609]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:29 compute-0 sudo[78634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config
Dec 15 10:35:29 compute-0 sudo[78634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:29 compute-0 sudo[78634]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:29 compute-0 sudo[78672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf.new
Dec 15 10:35:29 compute-0 sudo[78672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:29 compute-0 sudo[78672]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:29 compute-0 sudo[78720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:35:29 compute-0 sudo[78720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:29 compute-0 sudo[78720]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:29 compute-0 sudo[78793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-najvffltgueyzipxcundruajguwisaej ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765794928.7076545-37132-234857304508201/async_wrapper.py j17617050876 30 /home/zuul/.ansible/tmp/ansible-tmp-1765794928.7076545-37132-234857304508201/AnsiballZ_command.py _'
Dec 15 10:35:29 compute-0 sudo[78793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:35:29 compute-0 sudo[78770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf.new
Dec 15 10:35:29 compute-0 sudo[78770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:29 compute-0 sudo[78770]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:29 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/2709180897' entity='client.admin' 
Dec 15 10:35:29 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:29 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:29 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:29 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:29 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 15 10:35:29 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:35:29 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:35:29 compute-0 ceph-mon[74356]: Updating compute-0:/etc/ceph/ceph.conf
Dec 15 10:35:29 compute-0 sudo[78832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf.new
Dec 15 10:35:29 compute-0 sudo[78832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:29 compute-0 sudo[78832]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:29 compute-0 ansible-async_wrapper.py[78806]: Invoked with j17617050876 30 /home/zuul/.ansible/tmp/ansible-tmp-1765794928.7076545-37132-234857304508201/AnsiballZ_command.py _
Dec 15 10:35:29 compute-0 ansible-async_wrapper.py[78870]: Starting module and watcher
Dec 15 10:35:29 compute-0 ansible-async_wrapper.py[78870]: Start watching 78873 (30)
Dec 15 10:35:29 compute-0 ansible-async_wrapper.py[78873]: Start module (78873)
Dec 15 10:35:29 compute-0 ansible-async_wrapper.py[78806]: Return async_wrapper task started.
Dec 15 10:35:29 compute-0 sudo[78793]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:29 compute-0 sudo[78857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf.new
Dec 15 10:35:29 compute-0 sudo[78857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:29 compute-0 sudo[78857]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:29 compute-0 sudo[78887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf.new /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:35:29 compute-0 sudo[78887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:29 compute-0 sudo[78887]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:29 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:35:29 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:35:29 compute-0 python3[78879]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:35:29 compute-0 sudo[78912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 15 10:35:29 compute-0 sudo[78912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:29 compute-0 sudo[78912]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:29 compute-0 podman[78936]: 2025-12-15 10:35:29.54070859 +0000 UTC m=+0.040774726 container create c1c0ec32255b4539ee8602cea8c61246cfd2f95d5951305c37dc33e6701a610d (image=quay.io/ceph/ceph:v19, name=pedantic_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:35:29 compute-0 sudo[78943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph
Dec 15 10:35:29 compute-0 sudo[78943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:29 compute-0 sudo[78943]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:29 compute-0 systemd[1]: Started libpod-conmon-c1c0ec32255b4539ee8602cea8c61246cfd2f95d5951305c37dc33e6701a610d.scope.
Dec 15 10:35:29 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:29 compute-0 sudo[78978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.client.admin.keyring.new
Dec 15 10:35:29 compute-0 sudo[78978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97eaa8bde77913ac080251e9bc973b7e830fe5acb9ebbcdae8f27ff474e8ce96/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97eaa8bde77913ac080251e9bc973b7e830fe5acb9ebbcdae8f27ff474e8ce96/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:29 compute-0 podman[78936]: 2025-12-15 10:35:29.523472975 +0000 UTC m=+0.023539141 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:29 compute-0 sudo[78978]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:29 compute-0 podman[78936]: 2025-12-15 10:35:29.628341686 +0000 UTC m=+0.128407842 container init c1c0ec32255b4539ee8602cea8c61246cfd2f95d5951305c37dc33e6701a610d (image=quay.io/ceph/ceph:v19, name=pedantic_yonath, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:35:29 compute-0 podman[78936]: 2025-12-15 10:35:29.636110928 +0000 UTC m=+0.136177064 container start c1c0ec32255b4539ee8602cea8c61246cfd2f95d5951305c37dc33e6701a610d (image=quay.io/ceph/ceph:v19, name=pedantic_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:35:29 compute-0 podman[78936]: 2025-12-15 10:35:29.640018919 +0000 UTC m=+0.140085085 container attach c1c0ec32255b4539ee8602cea8c61246cfd2f95d5951305c37dc33e6701a610d (image=quay.io/ceph/ceph:v19, name=pedantic_yonath, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:35:29 compute-0 sudo[79007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:35:29 compute-0 sudo[79007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:29 compute-0 sudo[79007]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:29 compute-0 sudo[79032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.client.admin.keyring.new
Dec 15 10:35:29 compute-0 sudo[79032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:29 compute-0 sudo[79032]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:29 compute-0 sudo[79099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.client.admin.keyring.new
Dec 15 10:35:29 compute-0 sudo[79099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:29 compute-0 sudo[79099]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:29 compute-0 sudo[79124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.client.admin.keyring.new
Dec 15 10:35:29 compute-0 sudo[79124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:29 compute-0 sudo[79124]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:29 compute-0 sudo[79149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 15 10:35:29 compute-0 sudo[79149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:29 compute-0 sudo[79149]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:29 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:35:29 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:35:30 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 15 10:35:30 compute-0 sudo[79174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config
Dec 15 10:35:30 compute-0 pedantic_yonath[78991]: 
Dec 15 10:35:30 compute-0 pedantic_yonath[78991]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 15 10:35:30 compute-0 sudo[79174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:30 compute-0 sudo[79174]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:30 compute-0 systemd[1]: libpod-c1c0ec32255b4539ee8602cea8c61246cfd2f95d5951305c37dc33e6701a610d.scope: Deactivated successfully.
Dec 15 10:35:30 compute-0 podman[78936]: 2025-12-15 10:35:30.019754093 +0000 UTC m=+0.519820229 container died c1c0ec32255b4539ee8602cea8c61246cfd2f95d5951305c37dc33e6701a610d (image=quay.io/ceph/ceph:v19, name=pedantic_yonath, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:35:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-97eaa8bde77913ac080251e9bc973b7e830fe5acb9ebbcdae8f27ff474e8ce96-merged.mount: Deactivated successfully.
Dec 15 10:35:30 compute-0 podman[78936]: 2025-12-15 10:35:30.055304605 +0000 UTC m=+0.555370741 container remove c1c0ec32255b4539ee8602cea8c61246cfd2f95d5951305c37dc33e6701a610d (image=quay.io/ceph/ceph:v19, name=pedantic_yonath, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:35:30 compute-0 systemd[1]: libpod-conmon-c1c0ec32255b4539ee8602cea8c61246cfd2f95d5951305c37dc33e6701a610d.scope: Deactivated successfully.
Dec 15 10:35:30 compute-0 sudo[79201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config
Dec 15 10:35:30 compute-0 sudo[79201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:30 compute-0 ansible-async_wrapper.py[78873]: Module complete (78873)
Dec 15 10:35:30 compute-0 sudo[79201]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:30 compute-0 sudo[79236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring.new
Dec 15 10:35:30 compute-0 sudo[79236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:30 compute-0 sudo[79236]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:30 compute-0 sudo[79261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:35:30 compute-0 sudo[79261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:30 compute-0 sudo[79261]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:30 compute-0 ceph-mon[74356]: Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:35:30 compute-0 ceph-mon[74356]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:35:30 compute-0 sudo[79286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring.new
Dec 15 10:35:30 compute-0 sudo[79286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:30 compute-0 sudo[79286]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:30 compute-0 sudo[79334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring.new
Dec 15 10:35:30 compute-0 sudo[79334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:30 compute-0 sudo[79334]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:30 compute-0 sudo[79359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring.new
Dec 15 10:35:30 compute-0 sudo[79359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:30 compute-0 sudo[79359]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:30 compute-0 sudo[79407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring.new /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:35:30 compute-0 sudo[79407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:30 compute-0 sudo[79407]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:30 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:35:30 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:30 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:35:30 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:30 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 15 10:35:30 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:30 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev 4e0fb021-d0d1-4d2b-b9a7-f95434ea1ea2 (Updating crash deployment (+1 -> 1))
Dec 15 10:35:30 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 15 10:35:30 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 15 10:35:30 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 15 10:35:30 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:35:30 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:35:30 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec 15 10:35:30 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec 15 10:35:30 compute-0 sudo[79432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:35:30 compute-0 sudo[79432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:30 compute-0 sudo[79432]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:30 compute-0 sudo[79482]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhftbdpvkrbbatksbcjzhjjltrbkwvry ; /usr/bin/python3'
Dec 15 10:35:30 compute-0 sudo[79482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:35:30 compute-0 sudo[79480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:35:30 compute-0 sudo[79480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:30 compute-0 ceph-mgr[74651]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec 15 10:35:30 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:30 compute-0 ceph-mon[74356]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 15 10:35:30 compute-0 python3[79499]: ansible-ansible.legacy.async_status Invoked with jid=j17617050876.78806 mode=status _async_dir=/root/.ansible_async
Dec 15 10:35:30 compute-0 sudo[79482]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:30 compute-0 sudo[79568]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viqeqgnwhdhxlqsctuydfcksfbrupqhb ; /usr/bin/python3'
Dec 15 10:35:30 compute-0 sudo[79568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:35:31 compute-0 python3[79572]: ansible-ansible.legacy.async_status Invoked with jid=j17617050876.78806 mode=cleanup _async_dir=/root/.ansible_async
Dec 15 10:35:31 compute-0 podman[79597]: 2025-12-15 10:35:31.018299603 +0000 UTC m=+0.047248586 container create b24a173723c39cf5642966705fadeca7d638b65e53517a5bd3b9f3811e1ab16d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_dirac, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:35:31 compute-0 sudo[79568]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:31 compute-0 systemd[1]: Started libpod-conmon-b24a173723c39cf5642966705fadeca7d638b65e53517a5bd3b9f3811e1ab16d.scope.
Dec 15 10:35:31 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:31 compute-0 podman[79597]: 2025-12-15 10:35:30.994964759 +0000 UTC m=+0.023913762 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:35:31 compute-0 podman[79597]: 2025-12-15 10:35:31.094116824 +0000 UTC m=+0.123065817 container init b24a173723c39cf5642966705fadeca7d638b65e53517a5bd3b9f3811e1ab16d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 15 10:35:31 compute-0 podman[79597]: 2025-12-15 10:35:31.09947933 +0000 UTC m=+0.128428303 container start b24a173723c39cf5642966705fadeca7d638b65e53517a5bd3b9f3811e1ab16d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:35:31 compute-0 podman[79597]: 2025-12-15 10:35:31.102433711 +0000 UTC m=+0.131382704 container attach b24a173723c39cf5642966705fadeca7d638b65e53517a5bd3b9f3811e1ab16d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_dirac, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:35:31 compute-0 keen_dirac[79613]: 167 167
Dec 15 10:35:31 compute-0 systemd[1]: libpod-b24a173723c39cf5642966705fadeca7d638b65e53517a5bd3b9f3811e1ab16d.scope: Deactivated successfully.
Dec 15 10:35:31 compute-0 podman[79618]: 2025-12-15 10:35:31.146444466 +0000 UTC m=+0.025558734 container died b24a173723c39cf5642966705fadeca7d638b65e53517a5bd3b9f3811e1ab16d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_dirac, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:35:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d752dc259fd8dadadf50f0a9bd9385a50cfb7a2eec1cf4b95a7a7a4bd529a968-merged.mount: Deactivated successfully.
Dec 15 10:35:31 compute-0 podman[79618]: 2025-12-15 10:35:31.185970011 +0000 UTC m=+0.065084269 container remove b24a173723c39cf5642966705fadeca7d638b65e53517a5bd3b9f3811e1ab16d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:35:31 compute-0 systemd[1]: libpod-conmon-b24a173723c39cf5642966705fadeca7d638b65e53517a5bd3b9f3811e1ab16d.scope: Deactivated successfully.
Dec 15 10:35:31 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:35:31 compute-0 systemd[1]: Reloading.
Dec 15 10:35:31 compute-0 systemd-rc-local-generator[79660]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:35:31 compute-0 systemd-sysv-generator[79663]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:35:31 compute-0 sudo[79693]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqvclsupnxdbngrpzujhkrxyximeagar ; /usr/bin/python3'
Dec 15 10:35:31 compute-0 sudo[79693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:35:31 compute-0 ceph-mon[74356]: Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:35:31 compute-0 ceph-mon[74356]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 15 10:35:31 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:31 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:31 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:31 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 15 10:35:31 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 15 10:35:31 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:35:31 compute-0 ceph-mon[74356]: Deploying daemon crash.compute-0 on compute-0
Dec 15 10:35:31 compute-0 ceph-mon[74356]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:31 compute-0 ceph-mon[74356]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 15 10:35:31 compute-0 systemd[1]: Reloading.
Dec 15 10:35:31 compute-0 systemd-sysv-generator[79729]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:35:31 compute-0 systemd-rc-local-generator[79723]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:35:31 compute-0 python3[79697]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 15 10:35:31 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 77365f67-614e-5a8d-b658-640395550c79...
Dec 15 10:35:31 compute-0 sudo[79693]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:32 compute-0 podman[79788]: 2025-12-15 10:35:31.939625849 +0000 UTC m=+0.022739516 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:35:32 compute-0 podman[79788]: 2025-12-15 10:35:32.435823243 +0000 UTC m=+0.518936890 container create 1454e76beb00bd9b0c93ea417abfe9887d25bc76fb7d8bd8eeac40e3c12e72af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-crash-compute-0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 15 10:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44370caabfa42a267b4727415dc6aecc4a88ae0174f43ec98d98330930928fb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44370caabfa42a267b4727415dc6aecc4a88ae0174f43ec98d98330930928fb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44370caabfa42a267b4727415dc6aecc4a88ae0174f43ec98d98330930928fb3/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44370caabfa42a267b4727415dc6aecc4a88ae0174f43ec98d98330930928fb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:32 compute-0 sudo[79827]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vozvwbatgcampdajrtcsrwvlhheybmzs ; /usr/bin/python3'
Dec 15 10:35:32 compute-0 podman[79788]: 2025-12-15 10:35:32.488471896 +0000 UTC m=+0.571585573 container init 1454e76beb00bd9b0c93ea417abfe9887d25bc76fb7d8bd8eeac40e3c12e72af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-crash-compute-0, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:35:32 compute-0 sudo[79827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:35:32 compute-0 podman[79788]: 2025-12-15 10:35:32.495973078 +0000 UTC m=+0.579086725 container start 1454e76beb00bd9b0c93ea417abfe9887d25bc76fb7d8bd8eeac40e3c12e72af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-crash-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:35:32 compute-0 bash[79788]: 1454e76beb00bd9b0c93ea417abfe9887d25bc76fb7d8bd8eeac40e3c12e72af
Dec 15 10:35:32 compute-0 systemd[1]: Started Ceph crash.compute-0 for 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:35:32 compute-0 sudo[79480]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:35:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-crash-compute-0[79824]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 15 10:35:32 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:35:32 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 15 10:35:32 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:32 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev 4e0fb021-d0d1-4d2b-b9a7-f95434ea1ea2 (Updating crash deployment (+1 -> 1))
Dec 15 10:35:32 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event 4e0fb021-d0d1-4d2b-b9a7-f95434ea1ea2 (Updating crash deployment (+1 -> 1)) in 2 seconds
Dec 15 10:35:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 15 10:35:32 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 15 10:35:32 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 15 10:35:32 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:32 compute-0 python3[79831]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:35:32 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:32 compute-0 sudo[79835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 15 10:35:32 compute-0 sudo[79835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-crash-compute-0[79824]: 2025-12-15T10:35:32.642+0000 7f4b5af0c640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 15 10:35:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-crash-compute-0[79824]: 2025-12-15T10:35:32.642+0000 7f4b5af0c640 -1 AuthRegistry(0x7f4b540698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 15 10:35:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-crash-compute-0[79824]: 2025-12-15T10:35:32.644+0000 7f4b5af0c640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 15 10:35:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-crash-compute-0[79824]: 2025-12-15T10:35:32.644+0000 7f4b5af0c640 -1 AuthRegistry(0x7f4b5af0aff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 15 10:35:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-crash-compute-0[79824]: 2025-12-15T10:35:32.644+0000 7f4b58c81640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 15 10:35:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-crash-compute-0[79824]: 2025-12-15T10:35:32.646+0000 7f4b5af0c640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 15 10:35:32 compute-0 sudo[79835]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-crash-compute-0[79824]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 15 10:35:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-crash-compute-0[79824]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec 15 10:35:32 compute-0 podman[79858]: 2025-12-15 10:35:32.681281104 +0000 UTC m=+0.041778227 container create ed76cdcca2852703dec88317c389955f9a4885ce72147249a7e6be80a572c1b8 (image=quay.io/ceph/ceph:v19, name=clever_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 15 10:35:32 compute-0 systemd[1]: Started libpod-conmon-ed76cdcca2852703dec88317c389955f9a4885ce72147249a7e6be80a572c1b8.scope.
Dec 15 10:35:32 compute-0 sudo[79882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:35:32 compute-0 sudo[79882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:32 compute-0 sudo[79882]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:32 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e60bfad4c08b34b90f83d3e66e8e7ab9eb4fba9bec10195512d7a159b852df3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e60bfad4c08b34b90f83d3e66e8e7ab9eb4fba9bec10195512d7a159b852df3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e60bfad4c08b34b90f83d3e66e8e7ab9eb4fba9bec10195512d7a159b852df3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:32 compute-0 podman[79858]: 2025-12-15 10:35:32.664262506 +0000 UTC m=+0.024759659 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:32 compute-0 podman[79858]: 2025-12-15 10:35:32.763332988 +0000 UTC m=+0.123830141 container init ed76cdcca2852703dec88317c389955f9a4885ce72147249a7e6be80a572c1b8 (image=quay.io/ceph/ceph:v19, name=clever_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Dec 15 10:35:32 compute-0 podman[79858]: 2025-12-15 10:35:32.77018278 +0000 UTC m=+0.130679903 container start ed76cdcca2852703dec88317c389955f9a4885ce72147249a7e6be80a572c1b8 (image=quay.io/ceph/ceph:v19, name=clever_hugle, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 15 10:35:32 compute-0 podman[79858]: 2025-12-15 10:35:32.774164244 +0000 UTC m=+0.134661397 container attach ed76cdcca2852703dec88317c389955f9a4885ce72147249a7e6be80a572c1b8 (image=quay.io/ceph/ceph:v19, name=clever_hugle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:35:32 compute-0 sudo[79913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 15 10:35:32 compute-0 sudo[79913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:33 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 15 10:35:33 compute-0 clever_hugle[79909]: 
Dec 15 10:35:33 compute-0 clever_hugle[79909]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 15 10:35:33 compute-0 systemd[1]: libpod-ed76cdcca2852703dec88317c389955f9a4885ce72147249a7e6be80a572c1b8.scope: Deactivated successfully.
Dec 15 10:35:33 compute-0 podman[79858]: 2025-12-15 10:35:33.160824212 +0000 UTC m=+0.521321335 container died ed76cdcca2852703dec88317c389955f9a4885ce72147249a7e6be80a572c1b8 (image=quay.io/ceph/ceph:v19, name=clever_hugle, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:35:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e60bfad4c08b34b90f83d3e66e8e7ab9eb4fba9bec10195512d7a159b852df3-merged.mount: Deactivated successfully.
Dec 15 10:35:33 compute-0 podman[79858]: 2025-12-15 10:35:33.305501548 +0000 UTC m=+0.665998671 container remove ed76cdcca2852703dec88317c389955f9a4885ce72147249a7e6be80a572c1b8 (image=quay.io/ceph/ceph:v19, name=clever_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 15 10:35:33 compute-0 systemd[1]: libpod-conmon-ed76cdcca2852703dec88317c389955f9a4885ce72147249a7e6be80a572c1b8.scope: Deactivated successfully.
Dec 15 10:35:33 compute-0 sudo[79827]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:33 compute-0 podman[80041]: 2025-12-15 10:35:33.523051212 +0000 UTC m=+0.122121297 container exec 79ed8dc51b1852fd831764c2817cfa3745ae4937bc9015bdb965e376ab3cc58d (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 15 10:35:33 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:33 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:33 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:33 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:33 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:33 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:33 compute-0 ceph-mon[74356]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:33 compute-0 podman[80041]: 2025-12-15 10:35:33.628559083 +0000 UTC m=+0.227629148 container exec_died 79ed8dc51b1852fd831764c2817cfa3745ae4937bc9015bdb965e376ab3cc58d (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:35:33 compute-0 sudo[80084]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qifljoruqlowqijzpmzhnvmqharbdtxz ; /usr/bin/python3'
Dec 15 10:35:33 compute-0 sudo[80084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:35:33 compute-0 python3[80092]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:35:33 compute-0 sudo[79913]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:35:33 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:35:33 compute-0 podman[80134]: 2025-12-15 10:35:33.845873352 +0000 UTC m=+0.038025131 container create f956141f12e8e8ca0fb25da2c492d9191ac163494da6f8a019a806d2e8d8858e (image=quay.io/ceph/ceph:v19, name=heuristic_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:35:33 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:35:33 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:35:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 15 10:35:33 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:35:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 15 10:35:33 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:33 compute-0 systemd[1]: Started libpod-conmon-f956141f12e8e8ca0fb25da2c492d9191ac163494da6f8a019a806d2e8d8858e.scope.
Dec 15 10:35:33 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe91fe5c4745bab3e056a146cca625fe6056b88e5cbde4cab99370f2fa60f18/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe91fe5c4745bab3e056a146cca625fe6056b88e5cbde4cab99370f2fa60f18/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe91fe5c4745bab3e056a146cca625fe6056b88e5cbde4cab99370f2fa60f18/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:33 compute-0 sudo[80148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 15 10:35:33 compute-0 sudo[80148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:33 compute-0 sudo[80148]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:33 compute-0 podman[80134]: 2025-12-15 10:35:33.913938073 +0000 UTC m=+0.106089872 container init f956141f12e8e8ca0fb25da2c492d9191ac163494da6f8a019a806d2e8d8858e (image=quay.io/ceph/ceph:v19, name=heuristic_colden, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:35:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Dec 15 10:35:33 compute-0 podman[80134]: 2025-12-15 10:35:33.920213006 +0000 UTC m=+0.112364785 container start f956141f12e8e8ca0fb25da2c492d9191ac163494da6f8a019a806d2e8d8858e (image=quay.io/ceph/ceph:v19, name=heuristic_colden, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 15 10:35:33 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Dec 15 10:35:33 compute-0 podman[80134]: 2025-12-15 10:35:33.829323589 +0000 UTC m=+0.021475388 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:33 compute-0 podman[80134]: 2025-12-15 10:35:33.937267055 +0000 UTC m=+0.129418854 container attach f956141f12e8e8ca0fb25da2c492d9191ac163494da6f8a019a806d2e8d8858e (image=quay.io/ceph/ceph:v19, name=heuristic_colden, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 15 10:35:33 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Dec 15 10:35:33 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Dec 15 10:35:33 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:33 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec 15 10:35:33 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec 15 10:35:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 15 10:35:33 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 15 10:35:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 15 10:35:33 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 15 10:35:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:35:33 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:35:33 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec 15 10:35:33 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec 15 10:35:34 compute-0 sudo[80177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:35:34 compute-0 sudo[80177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:34 compute-0 sudo[80177]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:34 compute-0 sudo[80204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:35:34 compute-0 sudo[80204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:34 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Dec 15 10:35:34 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/365204256' entity='client.admin' 
Dec 15 10:35:34 compute-0 systemd[1]: libpod-f956141f12e8e8ca0fb25da2c492d9191ac163494da6f8a019a806d2e8d8858e.scope: Deactivated successfully.
Dec 15 10:35:34 compute-0 podman[80134]: 2025-12-15 10:35:34.320648502 +0000 UTC m=+0.512800281 container died f956141f12e8e8ca0fb25da2c492d9191ac163494da6f8a019a806d2e8d8858e (image=quay.io/ceph/ceph:v19, name=heuristic_colden, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 15 10:35:34 compute-0 ansible-async_wrapper.py[78870]: Done in kid B.
Dec 15 10:35:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fe91fe5c4745bab3e056a146cca625fe6056b88e5cbde4cab99370f2fa60f18-merged.mount: Deactivated successfully.
Dec 15 10:35:34 compute-0 podman[80134]: 2025-12-15 10:35:34.477797505 +0000 UTC m=+0.669949284 container remove f956141f12e8e8ca0fb25da2c492d9191ac163494da6f8a019a806d2e8d8858e (image=quay.io/ceph/ceph:v19, name=heuristic_colden, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 15 10:35:34 compute-0 systemd[1]: libpod-conmon-f956141f12e8e8ca0fb25da2c492d9191ac163494da6f8a019a806d2e8d8858e.scope: Deactivated successfully.
Dec 15 10:35:34 compute-0 sudo[80084]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:34 compute-0 podman[80273]: 2025-12-15 10:35:34.519703514 +0000 UTC m=+0.155322527 container create ed02db14fe90fd6bd25bda83199470791973a09c42fd8bf6b19e0e30a8ebc5fc (image=quay.io/ceph/ceph:v19, name=clever_bouman, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:35:34 compute-0 systemd[1]: Started libpod-conmon-ed02db14fe90fd6bd25bda83199470791973a09c42fd8bf6b19e0e30a8ebc5fc.scope.
Dec 15 10:35:34 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:34 compute-0 podman[80273]: 2025-12-15 10:35:34.497599749 +0000 UTC m=+0.133218752 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:34 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:34 compute-0 sudo[80317]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpwlqzbxitxvskmumrwlajsgxanxcynv ; /usr/bin/python3'
Dec 15 10:35:34 compute-0 sudo[80317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:35:34 compute-0 podman[80273]: 2025-12-15 10:35:34.685082121 +0000 UTC m=+0.320701154 container init ed02db14fe90fd6bd25bda83199470791973a09c42fd8bf6b19e0e30a8ebc5fc (image=quay.io/ceph/ceph:v19, name=clever_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 15 10:35:34 compute-0 podman[80273]: 2025-12-15 10:35:34.690995725 +0000 UTC m=+0.326614728 container start ed02db14fe90fd6bd25bda83199470791973a09c42fd8bf6b19e0e30a8ebc5fc (image=quay.io/ceph/ceph:v19, name=clever_bouman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 15 10:35:34 compute-0 ceph-mon[74356]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 15 10:35:34 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:34 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:34 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:35:34 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:35:34 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:34 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:34 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:34 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:34 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:34 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 15 10:35:34 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 15 10:35:34 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:35:34 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/365204256' entity='client.admin' 
Dec 15 10:35:34 compute-0 systemd[1]: libpod-ed02db14fe90fd6bd25bda83199470791973a09c42fd8bf6b19e0e30a8ebc5fc.scope: Deactivated successfully.
Dec 15 10:35:34 compute-0 clever_bouman[80291]: 167 167
Dec 15 10:35:34 compute-0 conmon[80291]: conmon ed02db14fe90fd6bd25b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ed02db14fe90fd6bd25bda83199470791973a09c42fd8bf6b19e0e30a8ebc5fc.scope/container/memory.events
Dec 15 10:35:34 compute-0 podman[80273]: 2025-12-15 10:35:34.697254389 +0000 UTC m=+0.332873412 container attach ed02db14fe90fd6bd25bda83199470791973a09c42fd8bf6b19e0e30a8ebc5fc (image=quay.io/ceph/ceph:v19, name=clever_bouman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 15 10:35:34 compute-0 podman[80273]: 2025-12-15 10:35:34.700614914 +0000 UTC m=+0.336233927 container died ed02db14fe90fd6bd25bda83199470791973a09c42fd8bf6b19e0e30a8ebc5fc (image=quay.io/ceph/ceph:v19, name=clever_bouman, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:35:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0cf4d05bf66b2dc9c60b880f3c487c6b69450c85000600a7d0111c98aa853b5-merged.mount: Deactivated successfully.
Dec 15 10:35:34 compute-0 python3[80319]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:35:34 compute-0 podman[80273]: 2025-12-15 10:35:34.839143078 +0000 UTC m=+0.474762081 container remove ed02db14fe90fd6bd25bda83199470791973a09c42fd8bf6b19e0e30a8ebc5fc (image=quay.io/ceph/ceph:v19, name=clever_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 15 10:35:34 compute-0 systemd[1]: libpod-conmon-ed02db14fe90fd6bd25bda83199470791973a09c42fd8bf6b19e0e30a8ebc5fc.scope: Deactivated successfully.
Dec 15 10:35:34 compute-0 podman[80333]: 2025-12-15 10:35:34.92690067 +0000 UTC m=+0.096186894 container create 6d4a8297a093e3af679dceb833fb4a5436f06cb3a7eed5e5d078bcb33fa13641 (image=quay.io/ceph/ceph:v19, name=blissful_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 15 10:35:34 compute-0 podman[80333]: 2025-12-15 10:35:34.85211731 +0000 UTC m=+0.021403554 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:34 compute-0 sudo[80204]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:34 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:35:34 compute-0 systemd[1]: Started libpod-conmon-6d4a8297a093e3af679dceb833fb4a5436f06cb3a7eed5e5d078bcb33fa13641.scope.
Dec 15 10:35:34 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35ec80dc0cd240cbf5b80f8c973193ea7c3d14c1aabd824cd093728044def79f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35ec80dc0cd240cbf5b80f8c973193ea7c3d14c1aabd824cd093728044def79f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35ec80dc0cd240cbf5b80f8c973193ea7c3d14c1aabd824cd093728044def79f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:35 compute-0 podman[80333]: 2025-12-15 10:35:35.014828715 +0000 UTC m=+0.184114959 container init 6d4a8297a093e3af679dceb833fb4a5436f06cb3a7eed5e5d078bcb33fa13641 (image=quay.io/ceph/ceph:v19, name=blissful_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:35:35 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:35 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:35:35 compute-0 podman[80333]: 2025-12-15 10:35:35.021063239 +0000 UTC m=+0.190349463 container start 6d4a8297a093e3af679dceb833fb4a5436f06cb3a7eed5e5d078bcb33fa13641 (image=quay.io/ceph/ceph:v19, name=blissful_williams, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:35:35 compute-0 podman[80333]: 2025-12-15 10:35:35.04657328 +0000 UTC m=+0.215859514 container attach 6d4a8297a093e3af679dceb833fb4a5436f06cb3a7eed5e5d078bcb33fa13641 (image=quay.io/ceph/ceph:v19, name=blissful_williams, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:35:35 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:35 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.difmqj (unknown last config time)...
Dec 15 10:35:35 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.difmqj (unknown last config time)...
Dec 15 10:35:35 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.difmqj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 15 10:35:35 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.difmqj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 15 10:35:35 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 15 10:35:35 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 15 10:35:35 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:35:35 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:35:35 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.difmqj on compute-0
Dec 15 10:35:35 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.difmqj on compute-0
Dec 15 10:35:35 compute-0 sudo[80352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:35:35 compute-0 sudo[80352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:35 compute-0 sudo[80352]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:35 compute-0 sudo[80379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:35:35 compute-0 sudo[80379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:35 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Dec 15 10:35:35 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1330255145' entity='client.admin' 
Dec 15 10:35:35 compute-0 systemd[1]: libpod-6d4a8297a093e3af679dceb833fb4a5436f06cb3a7eed5e5d078bcb33fa13641.scope: Deactivated successfully.
Dec 15 10:35:35 compute-0 podman[80333]: 2025-12-15 10:35:35.437115568 +0000 UTC m=+0.606401792 container died 6d4a8297a093e3af679dceb833fb4a5436f06cb3a7eed5e5d078bcb33fa13641 (image=quay.io/ceph/ceph:v19, name=blissful_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:35:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-35ec80dc0cd240cbf5b80f8c973193ea7c3d14c1aabd824cd093728044def79f-merged.mount: Deactivated successfully.
Dec 15 10:35:35 compute-0 podman[80333]: 2025-12-15 10:35:35.480779943 +0000 UTC m=+0.650066167 container remove 6d4a8297a093e3af679dceb833fb4a5436f06cb3a7eed5e5d078bcb33fa13641 (image=quay.io/ceph/ceph:v19, name=blissful_williams, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:35:35 compute-0 systemd[1]: libpod-conmon-6d4a8297a093e3af679dceb833fb4a5436f06cb3a7eed5e5d078bcb33fa13641.scope: Deactivated successfully.
Dec 15 10:35:35 compute-0 sudo[80317]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:35 compute-0 podman[80439]: 2025-12-15 10:35:35.501728232 +0000 UTC m=+0.061544970 container create 7586f6df7f6aa319685f2c527124809ab90000605b8d74755c100042ec617ea4 (image=quay.io/ceph/ceph:v19, name=thirsty_curie, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:35:35 compute-0 systemd[1]: Started libpod-conmon-7586f6df7f6aa319685f2c527124809ab90000605b8d74755c100042ec617ea4.scope.
Dec 15 10:35:35 compute-0 podman[80439]: 2025-12-15 10:35:35.46360789 +0000 UTC m=+0.023424648 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:35 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:35 compute-0 podman[80439]: 2025-12-15 10:35:35.577583334 +0000 UTC m=+0.137400092 container init 7586f6df7f6aa319685f2c527124809ab90000605b8d74755c100042ec617ea4 (image=quay.io/ceph/ceph:v19, name=thirsty_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 15 10:35:35 compute-0 podman[80439]: 2025-12-15 10:35:35.58294684 +0000 UTC m=+0.142763568 container start 7586f6df7f6aa319685f2c527124809ab90000605b8d74755c100042ec617ea4 (image=quay.io/ceph/ceph:v19, name=thirsty_curie, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:35:35 compute-0 thirsty_curie[80468]: 167 167
Dec 15 10:35:35 compute-0 podman[80439]: 2025-12-15 10:35:35.586765088 +0000 UTC m=+0.146581836 container attach 7586f6df7f6aa319685f2c527124809ab90000605b8d74755c100042ec617ea4 (image=quay.io/ceph/ceph:v19, name=thirsty_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:35:35 compute-0 systemd[1]: libpod-7586f6df7f6aa319685f2c527124809ab90000605b8d74755c100042ec617ea4.scope: Deactivated successfully.
Dec 15 10:35:35 compute-0 podman[80439]: 2025-12-15 10:35:35.587871943 +0000 UTC m=+0.147688691 container died 7586f6df7f6aa319685f2c527124809ab90000605b8d74755c100042ec617ea4 (image=quay.io/ceph/ceph:v19, name=thirsty_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 15 10:35:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fc12e957cb59d9609fa012f2bad34b194395b566e92666b424a00464d53810e-merged.mount: Deactivated successfully.
Dec 15 10:35:35 compute-0 podman[80439]: 2025-12-15 10:35:35.621209777 +0000 UTC m=+0.181026515 container remove 7586f6df7f6aa319685f2c527124809ab90000605b8d74755c100042ec617ea4 (image=quay.io/ceph/ceph:v19, name=thirsty_curie, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:35:35 compute-0 systemd[1]: libpod-conmon-7586f6df7f6aa319685f2c527124809ab90000605b8d74755c100042ec617ea4.scope: Deactivated successfully.
Dec 15 10:35:35 compute-0 sudo[80379]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:35 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:35:35 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:35 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:35:35 compute-0 sudo[80507]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbtfbjrvkobhjgtnhpaginmszgtumsow ; /usr/bin/python3'
Dec 15 10:35:35 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:35 compute-0 sudo[80507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:35:35 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:35:35 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:35:35 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 15 10:35:35 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:35:35 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 15 10:35:35 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:35 compute-0 ceph-mon[74356]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec 15 10:35:35 compute-0 ceph-mon[74356]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 15 10:35:35 compute-0 ceph-mon[74356]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:35 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:35 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:35 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.difmqj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 15 10:35:35 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 15 10:35:35 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:35:35 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/1330255145' entity='client.admin' 
Dec 15 10:35:35 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:35 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:35 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:35:35 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:35:35 compute-0 sudo[80510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 15 10:35:35 compute-0 sudo[80510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:35 compute-0 sudo[80510]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:35 compute-0 ceph-mgr[74651]: [progress INFO root] Writing back 1 completed events
Dec 15 10:35:35 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 15 10:35:35 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:35 compute-0 python3[80509]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:35:35 compute-0 podman[80535]: 2025-12-15 10:35:35.882049484 +0000 UTC m=+0.044732129 container create 385e91bd156553fa12e16b349b90fe17812391551a7b47c4cac6fa81b7baf7bf (image=quay.io/ceph/ceph:v19, name=thirsty_gates, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:35:35 compute-0 systemd[1]: Started libpod-conmon-385e91bd156553fa12e16b349b90fe17812391551a7b47c4cac6fa81b7baf7bf.scope.
Dec 15 10:35:35 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177886cdcdaeb50c3b2c5f8e64c797cd925ce69568cd1ee66173cdf4b3177476/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177886cdcdaeb50c3b2c5f8e64c797cd925ce69568cd1ee66173cdf4b3177476/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177886cdcdaeb50c3b2c5f8e64c797cd925ce69568cd1ee66173cdf4b3177476/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:35 compute-0 podman[80535]: 2025-12-15 10:35:35.951625391 +0000 UTC m=+0.114308066 container init 385e91bd156553fa12e16b349b90fe17812391551a7b47c4cac6fa81b7baf7bf (image=quay.io/ceph/ceph:v19, name=thirsty_gates, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 15 10:35:35 compute-0 podman[80535]: 2025-12-15 10:35:35.864742057 +0000 UTC m=+0.027424733 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:35 compute-0 podman[80535]: 2025-12-15 10:35:35.95997636 +0000 UTC m=+0.122659005 container start 385e91bd156553fa12e16b349b90fe17812391551a7b47c4cac6fa81b7baf7bf (image=quay.io/ceph/ceph:v19, name=thirsty_gates, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:35:35 compute-0 podman[80535]: 2025-12-15 10:35:35.965170001 +0000 UTC m=+0.127852646 container attach 385e91bd156553fa12e16b349b90fe17812391551a7b47c4cac6fa81b7baf7bf (image=quay.io/ceph/ceph:v19, name=thirsty_gates, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 15 10:35:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:35:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Dec 15 10:35:36 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1539864162' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec 15 10:35:36 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:36 compute-0 ceph-mon[74356]: Reconfiguring mgr.compute-0.difmqj (unknown last config time)...
Dec 15 10:35:36 compute-0 ceph-mon[74356]: Reconfiguring daemon mgr.compute-0.difmqj on compute-0
Dec 15 10:35:36 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:36 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:36 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/1539864162' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec 15 10:35:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec 15 10:35:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 15 10:35:36 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1539864162' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 15 10:35:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec 15 10:35:36 compute-0 thirsty_gates[80550]: set require_min_compat_client to mimic
Dec 15 10:35:36 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec 15 10:35:36 compute-0 systemd[1]: libpod-385e91bd156553fa12e16b349b90fe17812391551a7b47c4cac6fa81b7baf7bf.scope: Deactivated successfully.
Dec 15 10:35:36 compute-0 podman[80535]: 2025-12-15 10:35:36.843993829 +0000 UTC m=+1.006676494 container died 385e91bd156553fa12e16b349b90fe17812391551a7b47c4cac6fa81b7baf7bf (image=quay.io/ceph/ceph:v19, name=thirsty_gates, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:35:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-177886cdcdaeb50c3b2c5f8e64c797cd925ce69568cd1ee66173cdf4b3177476-merged.mount: Deactivated successfully.
Dec 15 10:35:36 compute-0 podman[80535]: 2025-12-15 10:35:36.919266733 +0000 UTC m=+1.081949378 container remove 385e91bd156553fa12e16b349b90fe17812391551a7b47c4cac6fa81b7baf7bf (image=quay.io/ceph/ceph:v19, name=thirsty_gates, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:35:36 compute-0 systemd[1]: libpod-conmon-385e91bd156553fa12e16b349b90fe17812391551a7b47c4cac6fa81b7baf7bf.scope: Deactivated successfully.
Dec 15 10:35:36 compute-0 sudo[80507]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:37 compute-0 sudo[80612]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmuophstxvghyejopcpzlyxyfnbueuwb ; /usr/bin/python3'
Dec 15 10:35:37 compute-0 sudo[80612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:35:37 compute-0 python3[80614]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:35:37 compute-0 podman[80615]: 2025-12-15 10:35:37.612539058 +0000 UTC m=+0.046093270 container create 2a8a9789a74301c18ded798b0c628ac29eba84c44ee0ec2d6450b20d98c92205 (image=quay.io/ceph/ceph:v19, name=amazing_booth, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:35:37 compute-0 systemd[1]: Started libpod-conmon-2a8a9789a74301c18ded798b0c628ac29eba84c44ee0ec2d6450b20d98c92205.scope.
Dec 15 10:35:37 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9b200030f42dd9de7820a8aed8a16e4dab8d7ebb8da309c05df32d7eefc235d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9b200030f42dd9de7820a8aed8a16e4dab8d7ebb8da309c05df32d7eefc235d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9b200030f42dd9de7820a8aed8a16e4dab8d7ebb8da309c05df32d7eefc235d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:37 compute-0 podman[80615]: 2025-12-15 10:35:37.684855411 +0000 UTC m=+0.118409623 container init 2a8a9789a74301c18ded798b0c628ac29eba84c44ee0ec2d6450b20d98c92205 (image=quay.io/ceph/ceph:v19, name=amazing_booth, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:35:37 compute-0 podman[80615]: 2025-12-15 10:35:37.592993122 +0000 UTC m=+0.026547344 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:37 compute-0 podman[80615]: 2025-12-15 10:35:37.691763914 +0000 UTC m=+0.125318126 container start 2a8a9789a74301c18ded798b0c628ac29eba84c44ee0ec2d6450b20d98c92205 (image=quay.io/ceph/ceph:v19, name=amazing_booth, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Dec 15 10:35:37 compute-0 podman[80615]: 2025-12-15 10:35:37.69546937 +0000 UTC m=+0.129023582 container attach 2a8a9789a74301c18ded798b0c628ac29eba84c44ee0ec2d6450b20d98c92205 (image=quay.io/ceph/ceph:v19, name=amazing_booth, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 15 10:35:37 compute-0 ceph-mon[74356]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:37 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/1539864162' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 15 10:35:37 compute-0 ceph-mon[74356]: osdmap e3: 0 total, 0 up, 0 in
Dec 15 10:35:38 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:38 compute-0 sudo[80653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:35:38 compute-0 sudo[80653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:38 compute-0 sudo[80653]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:38 compute-0 sudo[80678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Dec 15 10:35:38 compute-0 sudo[80678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:38 compute-0 sudo[80678]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 15 10:35:38 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 15 10:35:38 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 15 10:35:38 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 15 10:35:38 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:38 compute-0 ceph-mgr[74651]: [cephadm INFO root] Added host compute-0
Dec 15 10:35:38 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 15 10:35:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:35:38 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:35:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 15 10:35:38 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:35:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 15 10:35:38 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:38 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:38 compute-0 sudo[80722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 15 10:35:38 compute-0 sudo[80722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:35:38 compute-0 sudo[80722]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:39 compute-0 ceph-mon[74356]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:35:39 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:39 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:39 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:39 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:39 compute-0 ceph-mon[74356]: Added host compute-0
Dec 15 10:35:39 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:35:39 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:35:39 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:39 compute-0 ceph-mon[74356]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:39 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Dec 15 10:35:39 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Dec 15 10:35:40 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:40 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:35:40 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:35:40 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:35:40 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:35:40 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:35:40 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:35:41 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:35:41 compute-0 ceph-mon[74356]: Deploying cephadm binary to compute-1
Dec 15 10:35:41 compute-0 ceph-mon[74356]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:42 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:43 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 15 10:35:43 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:43 compute-0 ceph-mgr[74651]: [cephadm INFO root] Added host compute-1
Dec 15 10:35:43 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Added host compute-1
Dec 15 10:35:43 compute-0 ceph-mon[74356]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:43 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:43 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:35:43 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:35:44 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:44 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Dec 15 10:35:44 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Dec 15 10:35:44 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:44 compute-0 ceph-mon[74356]: Added host compute-1
Dec 15 10:35:44 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:44 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:45 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:35:45 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:45 compute-0 ceph-mon[74356]: Deploying cephadm binary to compute-2
Dec 15 10:35:45 compute-0 ceph-mon[74356]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:45 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:35:46 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:47 compute-0 ceph-mon[74356]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:48 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 15 10:35:48 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:48 compute-0 ceph-mgr[74651]: [cephadm INFO root] Added host compute-2
Dec 15 10:35:48 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Added host compute-2
Dec 15 10:35:48 compute-0 ceph-mgr[74651]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Dec 15 10:35:48 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Dec 15 10:35:48 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 15 10:35:48 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:48 compute-0 ceph-mgr[74651]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec 15 10:35:48 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec 15 10:35:48 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 15 10:35:48 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:48 compute-0 ceph-mgr[74651]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec 15 10:35:48 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec 15 10:35:48 compute-0 ceph-mgr[74651]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Dec 15 10:35:48 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Dec 15 10:35:48 compute-0 ceph-mgr[74651]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec 15 10:35:48 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec 15 10:35:48 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Dec 15 10:35:48 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:48 compute-0 amazing_booth[80629]: Added host 'compute-0' with addr '192.168.122.100'
Dec 15 10:35:48 compute-0 amazing_booth[80629]: Added host 'compute-1' with addr '192.168.122.101'
Dec 15 10:35:48 compute-0 amazing_booth[80629]: Added host 'compute-2' with addr '192.168.122.102'
Dec 15 10:35:48 compute-0 amazing_booth[80629]: Scheduled mon update...
Dec 15 10:35:48 compute-0 amazing_booth[80629]: Scheduled mgr update...
Dec 15 10:35:48 compute-0 amazing_booth[80629]: Scheduled osd.default_drive_group update...
Dec 15 10:35:48 compute-0 systemd[1]: libpod-2a8a9789a74301c18ded798b0c628ac29eba84c44ee0ec2d6450b20d98c92205.scope: Deactivated successfully.
Dec 15 10:35:48 compute-0 podman[80615]: 2025-12-15 10:35:48.463763102 +0000 UTC m=+10.897317314 container died 2a8a9789a74301c18ded798b0c628ac29eba84c44ee0ec2d6450b20d98c92205 (image=quay.io/ceph/ceph:v19, name=amazing_booth, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:35:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9b200030f42dd9de7820a8aed8a16e4dab8d7ebb8da309c05df32d7eefc235d-merged.mount: Deactivated successfully.
Dec 15 10:35:48 compute-0 podman[80615]: 2025-12-15 10:35:48.606112296 +0000 UTC m=+11.039666508 container remove 2a8a9789a74301c18ded798b0c628ac29eba84c44ee0ec2d6450b20d98c92205 (image=quay.io/ceph/ceph:v19, name=amazing_booth, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:35:48 compute-0 systemd[1]: libpod-conmon-2a8a9789a74301c18ded798b0c628ac29eba84c44ee0ec2d6450b20d98c92205.scope: Deactivated successfully.
Dec 15 10:35:48 compute-0 sudo[80612]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:48 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:48 compute-0 sudo[80781]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkqznkfoxhoqdjuadatiurxxhzjcpqcw ; /usr/bin/python3'
Dec 15 10:35:48 compute-0 sudo[80781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:35:49 compute-0 python3[80783]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:35:49 compute-0 podman[80785]: 2025-12-15 10:35:49.088887184 +0000 UTC m=+0.041611921 container create 6a70f2199618a59c58b812f8497af07411d156000ce234a6d1faeb039898a8b3 (image=quay.io/ceph/ceph:v19, name=magical_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 15 10:35:49 compute-0 systemd[1]: Started libpod-conmon-6a70f2199618a59c58b812f8497af07411d156000ce234a6d1faeb039898a8b3.scope.
Dec 15 10:35:49 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bc9129cdf2292fbf98e83132698ef77bfe5d1a43f53b018b532bba758e4a096/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bc9129cdf2292fbf98e83132698ef77bfe5d1a43f53b018b532bba758e4a096/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bc9129cdf2292fbf98e83132698ef77bfe5d1a43f53b018b532bba758e4a096/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:35:49 compute-0 podman[80785]: 2025-12-15 10:35:49.069630127 +0000 UTC m=+0.022354884 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:35:49 compute-0 podman[80785]: 2025-12-15 10:35:49.169061589 +0000 UTC m=+0.121786356 container init 6a70f2199618a59c58b812f8497af07411d156000ce234a6d1faeb039898a8b3 (image=quay.io/ceph/ceph:v19, name=magical_davinci, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 15 10:35:49 compute-0 podman[80785]: 2025-12-15 10:35:49.176444049 +0000 UTC m=+0.129168786 container start 6a70f2199618a59c58b812f8497af07411d156000ce234a6d1faeb039898a8b3 (image=quay.io/ceph/ceph:v19, name=magical_davinci, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 15 10:35:49 compute-0 podman[80785]: 2025-12-15 10:35:49.179994479 +0000 UTC m=+0.132719216 container attach 6a70f2199618a59c58b812f8497af07411d156000ce234a6d1faeb039898a8b3 (image=quay.io/ceph/ceph:v19, name=magical_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:35:49 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:49 compute-0 ceph-mon[74356]: Added host compute-2
Dec 15 10:35:49 compute-0 ceph-mon[74356]: Saving service mon spec with placement compute-0;compute-1;compute-2
Dec 15 10:35:49 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:49 compute-0 ceph-mon[74356]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec 15 10:35:49 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:49 compute-0 ceph-mon[74356]: Marking host: compute-0 for OSDSpec preview refresh.
Dec 15 10:35:49 compute-0 ceph-mon[74356]: Marking host: compute-1 for OSDSpec preview refresh.
Dec 15 10:35:49 compute-0 ceph-mon[74356]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec 15 10:35:49 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:35:49 compute-0 ceph-mon[74356]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 15 10:35:49 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1598370591' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 15 10:35:49 compute-0 magical_davinci[80801]: 
Dec 15 10:35:49 compute-0 magical_davinci[80801]: {"fsid":"77365f67-614e-5a8d-b658-640395550c79","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":58,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-15T10:34:49:065673+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-15T10:34:49.067589+0000","services":{}},"progress_events":{}}
Dec 15 10:35:49 compute-0 systemd[1]: libpod-6a70f2199618a59c58b812f8497af07411d156000ce234a6d1faeb039898a8b3.scope: Deactivated successfully.
Dec 15 10:35:49 compute-0 podman[80785]: 2025-12-15 10:35:49.68544011 +0000 UTC m=+0.638164847 container died 6a70f2199618a59c58b812f8497af07411d156000ce234a6d1faeb039898a8b3 (image=quay.io/ceph/ceph:v19, name=magical_davinci, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 15 10:35:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bc9129cdf2292fbf98e83132698ef77bfe5d1a43f53b018b532bba758e4a096-merged.mount: Deactivated successfully.
Dec 15 10:35:49 compute-0 podman[80785]: 2025-12-15 10:35:49.726216804 +0000 UTC m=+0.678941541 container remove 6a70f2199618a59c58b812f8497af07411d156000ce234a6d1faeb039898a8b3 (image=quay.io/ceph/ceph:v19, name=magical_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 15 10:35:49 compute-0 systemd[1]: libpod-conmon-6a70f2199618a59c58b812f8497af07411d156000ce234a6d1faeb039898a8b3.scope: Deactivated successfully.
Dec 15 10:35:49 compute-0 sudo[80781]: pam_unix(sudo:session): session closed for user root
Dec 15 10:35:50 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/1598370591' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 15 10:35:50 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:51 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:35:51 compute-0 ceph-mon[74356]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:52 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:53 compute-0 ceph-mon[74356]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:54 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:54 compute-0 ceph-mon[74356]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:56 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:35:56 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:57 compute-0 ceph-mon[74356]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:58 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:35:59 compute-0 ceph-mon[74356]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:00 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:00 compute-0 ceph-mon[74356]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:01 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:36:02 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:03 compute-0 ceph-mon[74356]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:04 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:05 compute-0 ceph-mon[74356]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:36:06 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:06 compute-0 ceph-mon[74356]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:08 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:09 compute-0 ceph-mon[74356]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:10 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:10 compute-0 ceph-mgr[74651]: [balancer INFO root] Optimize plan auto_2025-12-15_10:36:10
Dec 15 10:36:10 compute-0 ceph-mgr[74651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 15 10:36:10 compute-0 ceph-mgr[74651]: [balancer INFO root] do_upmap
Dec 15 10:36:10 compute-0 ceph-mgr[74651]: [balancer INFO root] No pools available
Dec 15 10:36:10 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 15 10:36:10 compute-0 ceph-mgr[74651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 15 10:36:10 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:36:10 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:36:10 compute-0 ceph-mgr[74651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 15 10:36:10 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:36:10 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:36:10 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:36:10 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:36:10 compute-0 ceph-mon[74356]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:11 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:36:12 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:13 compute-0 ceph-mon[74356]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:14 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:15 compute-0 ceph-mon[74356]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:16 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:36:16 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:16 compute-0 ceph-mon[74356]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:18 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:19 compute-0 ceph-mon[74356]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:19 compute-0 sudo[80864]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urytbjvadwbqggwoomtplxfcldsqebzj ; /usr/bin/python3'
Dec 15 10:36:19 compute-0 sudo[80864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:36:19 compute-0 python3[80866]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:36:20 compute-0 podman[80868]: 2025-12-15 10:36:20.049007277 +0000 UTC m=+0.052494265 container create 9125e7ae20a85a98f68d37b5021374f02db6b355d07242095f5052ca25f3de54 (image=quay.io/ceph/ceph:v19, name=exciting_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 15 10:36:20 compute-0 systemd[1]: Started libpod-conmon-9125e7ae20a85a98f68d37b5021374f02db6b355d07242095f5052ca25f3de54.scope.
Dec 15 10:36:20 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:36:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f33f8b1243137234332aa3c8c7132f4945c8049c456d87dd84132797a4a17d81/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f33f8b1243137234332aa3c8c7132f4945c8049c456d87dd84132797a4a17d81/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f33f8b1243137234332aa3c8c7132f4945c8049c456d87dd84132797a4a17d81/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:20 compute-0 podman[80868]: 2025-12-15 10:36:20.015456594 +0000 UTC m=+0.018943612 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:36:20 compute-0 podman[80868]: 2025-12-15 10:36:20.162486339 +0000 UTC m=+0.165973337 container init 9125e7ae20a85a98f68d37b5021374f02db6b355d07242095f5052ca25f3de54 (image=quay.io/ceph/ceph:v19, name=exciting_vaughan, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:36:20 compute-0 podman[80868]: 2025-12-15 10:36:20.167837306 +0000 UTC m=+0.171324294 container start 9125e7ae20a85a98f68d37b5021374f02db6b355d07242095f5052ca25f3de54 (image=quay.io/ceph/ceph:v19, name=exciting_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 15 10:36:20 compute-0 podman[80868]: 2025-12-15 10:36:20.229016201 +0000 UTC m=+0.232503189 container attach 9125e7ae20a85a98f68d37b5021374f02db6b355d07242095f5052ca25f3de54 (image=quay.io/ceph/ceph:v19, name=exciting_vaughan, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:36:20 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:36:20 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:20 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:36:20 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:20 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:36:20 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:20 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:36:20 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:20 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec 15 10:36:20 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 15 10:36:20 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:36:20 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:36:20 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 15 10:36:20 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:36:20 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec 15 10:36:20 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec 15 10:36:20 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 15 10:36:20 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/891914813' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 15 10:36:20 compute-0 exciting_vaughan[80884]: 
Dec 15 10:36:20 compute-0 exciting_vaughan[80884]: {"fsid":"77365f67-614e-5a8d-b658-640395550c79","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":89,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-15T10:34:49:065673+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-15T10:36:12.638692+0000","services":{}},"progress_events":{}}
Dec 15 10:36:20 compute-0 systemd[1]: libpod-9125e7ae20a85a98f68d37b5021374f02db6b355d07242095f5052ca25f3de54.scope: Deactivated successfully.
Dec 15 10:36:20 compute-0 podman[80868]: 2025-12-15 10:36:20.626270982 +0000 UTC m=+0.629757970 container died 9125e7ae20a85a98f68d37b5021374f02db6b355d07242095f5052ca25f3de54 (image=quay.io/ceph/ceph:v19, name=exciting_vaughan, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:36:20 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-f33f8b1243137234332aa3c8c7132f4945c8049c456d87dd84132797a4a17d81-merged.mount: Deactivated successfully.
Dec 15 10:36:20 compute-0 podman[80868]: 2025-12-15 10:36:20.837587687 +0000 UTC m=+0.841074675 container remove 9125e7ae20a85a98f68d37b5021374f02db6b355d07242095f5052ca25f3de54 (image=quay.io/ceph/ceph:v19, name=exciting_vaughan, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 15 10:36:20 compute-0 systemd[1]: libpod-conmon-9125e7ae20a85a98f68d37b5021374f02db6b355d07242095f5052ca25f3de54.scope: Deactivated successfully.
Dec 15 10:36:20 compute-0 sudo[80864]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:21 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:36:21 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:36:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:36:21 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:21 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:21 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:21 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:21 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 15 10:36:21 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:36:21 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:36:21 compute-0 ceph-mon[74356]: Updating compute-1:/etc/ceph/ceph.conf
Dec 15 10:36:21 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/891914813' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 15 10:36:21 compute-0 ceph-mon[74356]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:21 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:36:21 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:36:22 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:36:22 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:36:22 compute-0 ceph-mon[74356]: Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:36:22 compute-0 ceph-mon[74356]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:36:22 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:22 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:36:22 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:22 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:36:22 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:22 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 15 10:36:22 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:22 compute-0 ceph-mgr[74651]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 15 10:36:22 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 15 10:36:22 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:22 compute-0 ceph-mgr[74651]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 15 10:36:22 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 15 10:36:22 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:22 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev 3fb3998b-9224-4bb4-bea1-ffde44597c31 (Updating crash deployment (+1 -> 2))
Dec 15 10:36:22 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 15 10:36:22 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 15 10:36:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:36:22.792+0000 7fc9ec019640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Dec 15 10:36:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: service_name: mon
Dec 15 10:36:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: placement:
Dec 15 10:36:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]:   hosts:
Dec 15 10:36:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]:   - compute-0
Dec 15 10:36:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]:   - compute-1
Dec 15 10:36:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]:   - compute-2
Dec 15 10:36:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 15 10:36:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:36:22.793+0000 7fc9ec019640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Dec 15 10:36:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: service_name: mgr
Dec 15 10:36:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: placement:
Dec 15 10:36:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]:   hosts:
Dec 15 10:36:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]:   - compute-0
Dec 15 10:36:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]:   - compute-1
Dec 15 10:36:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]:   - compute-2
Dec 15 10:36:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 15 10:36:22 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 15 10:36:22 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:36:22 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:36:22 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Dec 15 10:36:22 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Dec 15 10:36:23 compute-0 ceph-mon[74356]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Dec 15 10:36:23 compute-0 ceph-mon[74356]: Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:36:23 compute-0 ceph-mon[74356]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:23 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:23 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:23 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:23 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 15 10:36:23 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 15 10:36:23 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:36:23 compute-0 ceph-mon[74356]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Dec 15 10:36:24 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:24 compute-0 ceph-mon[74356]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 15 10:36:24 compute-0 ceph-mon[74356]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:24 compute-0 ceph-mon[74356]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 15 10:36:24 compute-0 ceph-mon[74356]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:24 compute-0 ceph-mon[74356]: Deploying daemon crash.compute-1 on compute-1
Dec 15 10:36:25 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:36:25 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:25 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:36:25 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:25 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 15 10:36:25 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:25 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev 3fb3998b-9224-4bb4-bea1-ffde44597c31 (Updating crash deployment (+1 -> 2))
Dec 15 10:36:25 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event 3fb3998b-9224-4bb4-bea1-ffde44597c31 (Updating crash deployment (+1 -> 2)) in 3 seconds
Dec 15 10:36:25 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 15 10:36:25 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:25 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 15 10:36:25 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 15 10:36:25 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 15 10:36:25 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 15 10:36:25 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:36:25 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:36:25 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 15 10:36:25 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 15 10:36:25 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:36:25 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:36:25 compute-0 sudo[80922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:36:25 compute-0 sudo[80922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:36:25 compute-0 sudo[80922]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:25 compute-0 sudo[80947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 15 10:36:25 compute-0 sudo[80947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:36:25 compute-0 ceph-mgr[74651]: [progress INFO root] Writing back 2 completed events
Dec 15 10:36:25 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 15 10:36:25 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:25 compute-0 podman[81012]: 2025-12-15 10:36:25.866825374 +0000 UTC m=+0.034321655 container create 495c432d99ca4533f00502f928e7fbcbf3f319b33e46f883f8a26b4b52dc5c58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_hawking, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 15 10:36:25 compute-0 systemd[1]: Started libpod-conmon-495c432d99ca4533f00502f928e7fbcbf3f319b33e46f883f8a26b4b52dc5c58.scope.
Dec 15 10:36:25 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:36:25 compute-0 podman[81012]: 2025-12-15 10:36:25.925040846 +0000 UTC m=+0.092537147 container init 495c432d99ca4533f00502f928e7fbcbf3f319b33e46f883f8a26b4b52dc5c58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_hawking, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:36:25 compute-0 podman[81012]: 2025-12-15 10:36:25.932781749 +0000 UTC m=+0.100278020 container start 495c432d99ca4533f00502f928e7fbcbf3f319b33e46f883f8a26b4b52dc5c58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_hawking, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 15 10:36:25 compute-0 inspiring_hawking[81028]: 167 167
Dec 15 10:36:25 compute-0 systemd[1]: libpod-495c432d99ca4533f00502f928e7fbcbf3f319b33e46f883f8a26b4b52dc5c58.scope: Deactivated successfully.
Dec 15 10:36:25 compute-0 podman[81012]: 2025-12-15 10:36:25.937856849 +0000 UTC m=+0.105353130 container attach 495c432d99ca4533f00502f928e7fbcbf3f319b33e46f883f8a26b4b52dc5c58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_hawking, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:36:25 compute-0 podman[81012]: 2025-12-15 10:36:25.938116977 +0000 UTC m=+0.105613248 container died 495c432d99ca4533f00502f928e7fbcbf3f319b33e46f883f8a26b4b52dc5c58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_hawking, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec 15 10:36:25 compute-0 podman[81012]: 2025-12-15 10:36:25.851895273 +0000 UTC m=+0.019391554 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:36:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fa541ea24463335cfb5ac6452afc8bb6f8addd20aa5905d1aa7d91f01dadc61-merged.mount: Deactivated successfully.
Dec 15 10:36:25 compute-0 podman[81012]: 2025-12-15 10:36:25.971889516 +0000 UTC m=+0.139385807 container remove 495c432d99ca4533f00502f928e7fbcbf3f319b33e46f883f8a26b4b52dc5c58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:36:25 compute-0 systemd[1]: libpod-conmon-495c432d99ca4533f00502f928e7fbcbf3f319b33e46f883f8a26b4b52dc5c58.scope: Deactivated successfully.
Dec 15 10:36:26 compute-0 podman[81052]: 2025-12-15 10:36:26.110337865 +0000 UTC m=+0.038632844 container create 31be8a159620b019c9532491833ff6443b1e9782d7e2491b38da81642669501b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_borg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 15 10:36:26 compute-0 systemd[1]: Started libpod-conmon-31be8a159620b019c9532491833ff6443b1e9782d7e2491b38da81642669501b.scope.
Dec 15 10:36:26 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf07fde13df0482e779d43c33d065b1d936ed472d4b2400741f799257c02002/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf07fde13df0482e779d43c33d065b1d936ed472d4b2400741f799257c02002/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf07fde13df0482e779d43c33d065b1d936ed472d4b2400741f799257c02002/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf07fde13df0482e779d43c33d065b1d936ed472d4b2400741f799257c02002/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf07fde13df0482e779d43c33d065b1d936ed472d4b2400741f799257c02002/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:26 compute-0 podman[81052]: 2025-12-15 10:36:26.185406381 +0000 UTC m=+0.113701390 container init 31be8a159620b019c9532491833ff6443b1e9782d7e2491b38da81642669501b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_borg, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 15 10:36:26 compute-0 podman[81052]: 2025-12-15 10:36:26.093043909 +0000 UTC m=+0.021338938 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:36:26 compute-0 podman[81052]: 2025-12-15 10:36:26.19337141 +0000 UTC m=+0.121666389 container start 31be8a159620b019c9532491833ff6443b1e9782d7e2491b38da81642669501b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:36:26 compute-0 podman[81052]: 2025-12-15 10:36:26.196819605 +0000 UTC m=+0.125114614 container attach 31be8a159620b019c9532491833ff6443b1e9782d7e2491b38da81642669501b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 15 10:36:26 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:36:26 compute-0 ceph-mon[74356]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 15 10:36:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 15 10:36:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:36:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 15 10:36:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:36:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:26 compute-0 charming_borg[81068]: --> passed data devices: 0 physical, 1 LVM
Dec 15 10:36:26 compute-0 charming_borg[81068]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 15 10:36:26 compute-0 charming_borg[81068]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 15 10:36:26 compute-0 charming_borg[81068]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 7690eca0-4e87-4157-a045-1912448da925
Dec 15 10:36:26 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "7690eca0-4e87-4157-a045-1912448da925"} v 0)
Dec 15 10:36:27 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4075126690' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7690eca0-4e87-4157-a045-1912448da925"}]: dispatch
Dec 15 10:36:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec 15 10:36:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 15 10:36:27 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4075126690' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7690eca0-4e87-4157-a045-1912448da925"}]': finished
Dec 15 10:36:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec 15 10:36:27 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec 15 10:36:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 15 10:36:27 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:27 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 15 10:36:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "13bebb76-41f5-4f38-b05e-545f7fa0c450"} v 0)
Dec 15 10:36:27 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/549754327' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "13bebb76-41f5-4f38-b05e-545f7fa0c450"}]: dispatch
Dec 15 10:36:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec 15 10:36:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 15 10:36:27 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/549754327' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "13bebb76-41f5-4f38-b05e-545f7fa0c450"}]': finished
Dec 15 10:36:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec 15 10:36:27 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec 15 10:36:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 15 10:36:27 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 15 10:36:27 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:36:27 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 15 10:36:27 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 15 10:36:27 compute-0 charming_borg[81068]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Dec 15 10:36:27 compute-0 charming_borg[81068]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 15 10:36:27 compute-0 charming_borg[81068]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 15 10:36:27 compute-0 charming_borg[81068]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 15 10:36:27 compute-0 charming_borg[81068]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Dec 15 10:36:27 compute-0 lvm[81130]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 15 10:36:27 compute-0 lvm[81130]: VG ceph_vg0 finished
Dec 15 10:36:27 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/4075126690' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7690eca0-4e87-4157-a045-1912448da925"}]: dispatch
Dec 15 10:36:27 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/4075126690' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7690eca0-4e87-4157-a045-1912448da925"}]': finished
Dec 15 10:36:27 compute-0 ceph-mon[74356]: osdmap e4: 1 total, 0 up, 1 in
Dec 15 10:36:27 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:27 compute-0 ceph-mon[74356]: from='client.? 192.168.122.101:0/549754327' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "13bebb76-41f5-4f38-b05e-545f7fa0c450"}]: dispatch
Dec 15 10:36:27 compute-0 ceph-mon[74356]: from='client.? 192.168.122.101:0/549754327' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "13bebb76-41f5-4f38-b05e-545f7fa0c450"}]': finished
Dec 15 10:36:27 compute-0 ceph-mon[74356]: osdmap e5: 2 total, 0 up, 2 in
Dec 15 10:36:27 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:27 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:36:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 15 10:36:27 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4137503366' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 15 10:36:27 compute-0 charming_borg[81068]:  stderr: got monmap epoch 1
Dec 15 10:36:27 compute-0 charming_borg[81068]: --> Creating keyring file for osd.0
Dec 15 10:36:27 compute-0 charming_borg[81068]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Dec 15 10:36:27 compute-0 charming_borg[81068]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Dec 15 10:36:27 compute-0 charming_borg[81068]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 7690eca0-4e87-4157-a045-1912448da925 --setuser ceph --setgroup ceph
Dec 15 10:36:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 15 10:36:27 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2575506037' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 15 10:36:28 compute-0 ceph-mon[74356]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:28 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/4137503366' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 15 10:36:28 compute-0 ceph-mon[74356]: from='client.? 192.168.122.101:0/2575506037' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 15 10:36:28 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:29 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 15 10:36:29 compute-0 ceph-mon[74356]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 15 10:36:30 compute-0 charming_borg[81068]:  stderr: 2025-12-15T10:36:27.775+0000 7f379da55740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Dec 15 10:36:30 compute-0 charming_borg[81068]:  stderr: 2025-12-15T10:36:28.037+0000 7f379da55740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Dec 15 10:36:30 compute-0 charming_borg[81068]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec 15 10:36:30 compute-0 charming_borg[81068]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 15 10:36:30 compute-0 charming_borg[81068]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 15 10:36:30 compute-0 ceph-mon[74356]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:30 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:30 compute-0 charming_borg[81068]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 15 10:36:30 compute-0 charming_borg[81068]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 15 10:36:30 compute-0 charming_borg[81068]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 15 10:36:30 compute-0 charming_borg[81068]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 15 10:36:30 compute-0 charming_borg[81068]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 15 10:36:30 compute-0 charming_borg[81068]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec 15 10:36:30 compute-0 systemd[1]: libpod-31be8a159620b019c9532491833ff6443b1e9782d7e2491b38da81642669501b.scope: Deactivated successfully.
Dec 15 10:36:30 compute-0 systemd[1]: libpod-31be8a159620b019c9532491833ff6443b1e9782d7e2491b38da81642669501b.scope: Consumed 2.056s CPU time.
Dec 15 10:36:30 compute-0 podman[81052]: 2025-12-15 10:36:30.984676649 +0000 UTC m=+4.912971618 container died 31be8a159620b019c9532491833ff6443b1e9782d7e2491b38da81642669501b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:36:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cf07fde13df0482e779d43c33d065b1d936ed472d4b2400741f799257c02002-merged.mount: Deactivated successfully.
Dec 15 10:36:31 compute-0 podman[81052]: 2025-12-15 10:36:31.078709566 +0000 UTC m=+5.007004545 container remove 31be8a159620b019c9532491833ff6443b1e9782d7e2491b38da81642669501b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_borg, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 15 10:36:31 compute-0 systemd[1]: libpod-conmon-31be8a159620b019c9532491833ff6443b1e9782d7e2491b38da81642669501b.scope: Deactivated successfully.
Dec 15 10:36:31 compute-0 sudo[80947]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:31 compute-0 sudo[82071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:36:31 compute-0 sudo[82071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:36:31 compute-0 sudo[82071]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:31 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:36:31 compute-0 sudo[82096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 -- lvm list --format json
Dec 15 10:36:31 compute-0 sudo[82096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:36:31 compute-0 podman[82161]: 2025-12-15 10:36:31.61050272 +0000 UTC m=+0.049405050 container create fbe8db5b7ef2fe667c3467fe752682a368614216a8316d3894a9e3cb9cff03b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 15 10:36:31 compute-0 systemd[1]: Started libpod-conmon-fbe8db5b7ef2fe667c3467fe752682a368614216a8316d3894a9e3cb9cff03b2.scope.
Dec 15 10:36:31 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:36:31 compute-0 podman[82161]: 2025-12-15 10:36:31.584328891 +0000 UTC m=+0.023231231 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:36:31 compute-0 podman[82161]: 2025-12-15 10:36:31.689544496 +0000 UTC m=+0.128446816 container init fbe8db5b7ef2fe667c3467fe752682a368614216a8316d3894a9e3cb9cff03b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Dec 15 10:36:31 compute-0 podman[82161]: 2025-12-15 10:36:31.696058195 +0000 UTC m=+0.134960485 container start fbe8db5b7ef2fe667c3467fe752682a368614216a8316d3894a9e3cb9cff03b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 15 10:36:31 compute-0 podman[82161]: 2025-12-15 10:36:31.699719676 +0000 UTC m=+0.138621986 container attach fbe8db5b7ef2fe667c3467fe752682a368614216a8316d3894a9e3cb9cff03b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Dec 15 10:36:31 compute-0 compassionate_liskov[82177]: 167 167
Dec 15 10:36:31 compute-0 systemd[1]: libpod-fbe8db5b7ef2fe667c3467fe752682a368614216a8316d3894a9e3cb9cff03b2.scope: Deactivated successfully.
Dec 15 10:36:31 compute-0 podman[82161]: 2025-12-15 10:36:31.700717013 +0000 UTC m=+0.139619303 container died fbe8db5b7ef2fe667c3467fe752682a368614216a8316d3894a9e3cb9cff03b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:36:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe5c465728db5b92792e079684b368067d7dec15c84dee95516d60fae66897dc-merged.mount: Deactivated successfully.
Dec 15 10:36:32 compute-0 podman[82161]: 2025-12-15 10:36:32.222454811 +0000 UTC m=+0.661357121 container remove fbe8db5b7ef2fe667c3467fe752682a368614216a8316d3894a9e3cb9cff03b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_liskov, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:36:32 compute-0 systemd[1]: libpod-conmon-fbe8db5b7ef2fe667c3467fe752682a368614216a8316d3894a9e3cb9cff03b2.scope: Deactivated successfully.
Dec 15 10:36:32 compute-0 podman[82201]: 2025-12-15 10:36:32.366730861 +0000 UTC m=+0.023164468 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:36:32 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:32 compute-0 ceph-mon[74356]: pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:32 compute-0 podman[82201]: 2025-12-15 10:36:32.811353736 +0000 UTC m=+0.467787323 container create f463c398abe67d28ad824bef3fa0733cae79d8663e67cbabe8cd57fcd22e165e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_sutherland, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:36:32 compute-0 systemd[1]: Started libpod-conmon-f463c398abe67d28ad824bef3fa0733cae79d8663e67cbabe8cd57fcd22e165e.scope.
Dec 15 10:36:32 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb4e376540b87f56b3f0f170d0551564ca7acb8d352b6e213a98df0855f51bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb4e376540b87f56b3f0f170d0551564ca7acb8d352b6e213a98df0855f51bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb4e376540b87f56b3f0f170d0551564ca7acb8d352b6e213a98df0855f51bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb4e376540b87f56b3f0f170d0551564ca7acb8d352b6e213a98df0855f51bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:32 compute-0 podman[82201]: 2025-12-15 10:36:32.890454013 +0000 UTC m=+0.546887640 container init f463c398abe67d28ad824bef3fa0733cae79d8663e67cbabe8cd57fcd22e165e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 15 10:36:32 compute-0 podman[82201]: 2025-12-15 10:36:32.899332347 +0000 UTC m=+0.555765934 container start f463c398abe67d28ad824bef3fa0733cae79d8663e67cbabe8cd57fcd22e165e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_sutherland, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 15 10:36:32 compute-0 podman[82201]: 2025-12-15 10:36:32.90234617 +0000 UTC m=+0.558779777 container attach f463c398abe67d28ad824bef3fa0733cae79d8663e67cbabe8cd57fcd22e165e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_sutherland, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 15 10:36:33 compute-0 nice_sutherland[82217]: {
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:     "0": [
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:         {
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:             "devices": [
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:                 "/dev/loop3"
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:             ],
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:             "lv_name": "ceph_lv0",
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:             "lv_size": "21470642176",
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MCSOGR-6V69-UVGq-8FpK-DBjb-LyDE-FLUPxJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=77365f67-614e-5a8d-b658-640395550c79,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7690eca0-4e87-4157-a045-1912448da925,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:             "lv_uuid": "MCSOGR-6V69-UVGq-8FpK-DBjb-LyDE-FLUPxJ",
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:             "name": "ceph_lv0",
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:             "tags": {
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:                 "ceph.block_uuid": "MCSOGR-6V69-UVGq-8FpK-DBjb-LyDE-FLUPxJ",
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:                 "ceph.cephx_lockbox_secret": "",
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:                 "ceph.cluster_fsid": "77365f67-614e-5a8d-b658-640395550c79",
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:                 "ceph.cluster_name": "ceph",
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:                 "ceph.crush_device_class": "",
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:                 "ceph.encrypted": "0",
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:                 "ceph.osd_fsid": "7690eca0-4e87-4157-a045-1912448da925",
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:                 "ceph.osd_id": "0",
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:                 "ceph.type": "block",
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:                 "ceph.vdo": "0",
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:                 "ceph.with_tpm": "0"
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:             },
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:             "type": "block",
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:             "vg_name": "ceph_vg0"
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:         }
Dec 15 10:36:33 compute-0 nice_sutherland[82217]:     ]
Dec 15 10:36:33 compute-0 nice_sutherland[82217]: }
Dec 15 10:36:33 compute-0 systemd[1]: libpod-f463c398abe67d28ad824bef3fa0733cae79d8663e67cbabe8cd57fcd22e165e.scope: Deactivated successfully.
Dec 15 10:36:33 compute-0 podman[82201]: 2025-12-15 10:36:33.185889013 +0000 UTC m=+0.842322600 container died f463c398abe67d28ad824bef3fa0733cae79d8663e67cbabe8cd57fcd22e165e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 15 10:36:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 15 10:36:33 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 15 10:36:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:36:33 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:36:33 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-1
Dec 15 10:36:33 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-1
Dec 15 10:36:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebb4e376540b87f56b3f0f170d0551564ca7acb8d352b6e213a98df0855f51bd-merged.mount: Deactivated successfully.
Dec 15 10:36:33 compute-0 podman[82201]: 2025-12-15 10:36:33.231980961 +0000 UTC m=+0.888414548 container remove f463c398abe67d28ad824bef3fa0733cae79d8663e67cbabe8cd57fcd22e165e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Dec 15 10:36:33 compute-0 systemd[1]: libpod-conmon-f463c398abe67d28ad824bef3fa0733cae79d8663e67cbabe8cd57fcd22e165e.scope: Deactivated successfully.
Dec 15 10:36:33 compute-0 sudo[82096]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 15 10:36:33 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 15 10:36:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:36:33 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:36:33 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Dec 15 10:36:33 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Dec 15 10:36:33 compute-0 sudo[82237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:36:33 compute-0 sudo[82237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:36:33 compute-0 sudo[82237]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:33 compute-0 sudo[82262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:36:33 compute-0 sudo[82262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:36:33 compute-0 podman[82323]: 2025-12-15 10:36:33.788144086 +0000 UTC m=+0.047441216 container create b792408e9d0b2171245bcdb3d199dc7f169b6228c576a90e829fce15da866a1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_elion, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:36:33 compute-0 systemd[1]: Started libpod-conmon-b792408e9d0b2171245bcdb3d199dc7f169b6228c576a90e829fce15da866a1c.scope.
Dec 15 10:36:33 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:36:33 compute-0 podman[82323]: 2025-12-15 10:36:33.85151253 +0000 UTC m=+0.110809680 container init b792408e9d0b2171245bcdb3d199dc7f169b6228c576a90e829fce15da866a1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_elion, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 15 10:36:33 compute-0 podman[82323]: 2025-12-15 10:36:33.761885793 +0000 UTC m=+0.021182943 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:36:33 compute-0 podman[82323]: 2025-12-15 10:36:33.857655829 +0000 UTC m=+0.116952969 container start b792408e9d0b2171245bcdb3d199dc7f169b6228c576a90e829fce15da866a1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_elion, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:36:33 compute-0 xenodochial_elion[82339]: 167 167
Dec 15 10:36:33 compute-0 systemd[1]: libpod-b792408e9d0b2171245bcdb3d199dc7f169b6228c576a90e829fce15da866a1c.scope: Deactivated successfully.
Dec 15 10:36:33 compute-0 podman[82323]: 2025-12-15 10:36:33.898100311 +0000 UTC m=+0.157397461 container attach b792408e9d0b2171245bcdb3d199dc7f169b6228c576a90e829fce15da866a1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_elion, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:36:33 compute-0 podman[82323]: 2025-12-15 10:36:33.89875746 +0000 UTC m=+0.158054590 container died b792408e9d0b2171245bcdb3d199dc7f169b6228c576a90e829fce15da866a1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_elion, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 15 10:36:33 compute-0 ceph-mon[74356]: pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:33 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 15 10:36:33 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:36:33 compute-0 ceph-mon[74356]: Deploying daemon osd.1 on compute-1
Dec 15 10:36:33 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 15 10:36:33 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:36:33 compute-0 ceph-mon[74356]: Deploying daemon osd.0 on compute-0
Dec 15 10:36:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b154c4f91a3ebe85f0f384b1ae16a57b0505e1c3fd87f03de227cf749aea2c7-merged.mount: Deactivated successfully.
Dec 15 10:36:33 compute-0 podman[82323]: 2025-12-15 10:36:33.948286352 +0000 UTC m=+0.207583482 container remove b792408e9d0b2171245bcdb3d199dc7f169b6228c576a90e829fce15da866a1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_elion, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 15 10:36:33 compute-0 systemd[1]: libpod-conmon-b792408e9d0b2171245bcdb3d199dc7f169b6228c576a90e829fce15da866a1c.scope: Deactivated successfully.
Dec 15 10:36:34 compute-0 podman[82372]: 2025-12-15 10:36:34.163610678 +0000 UTC m=+0.022847339 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:36:34 compute-0 podman[82372]: 2025-12-15 10:36:34.351785366 +0000 UTC m=+0.211021997 container create 7219ebba47c3952152b29573029507e32ce572872bd2c6ac3e881c3f64fe3aa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate-test, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 15 10:36:34 compute-0 systemd[1]: Started libpod-conmon-7219ebba47c3952152b29573029507e32ce572872bd2c6ac3e881c3f64fe3aa7.scope.
Dec 15 10:36:34 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a09b5d2563dc17d7c68914fe71812e5871577913d5f060dcf0a7c04691a0a01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a09b5d2563dc17d7c68914fe71812e5871577913d5f060dcf0a7c04691a0a01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a09b5d2563dc17d7c68914fe71812e5871577913d5f060dcf0a7c04691a0a01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a09b5d2563dc17d7c68914fe71812e5871577913d5f060dcf0a7c04691a0a01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a09b5d2563dc17d7c68914fe71812e5871577913d5f060dcf0a7c04691a0a01/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:34 compute-0 podman[82372]: 2025-12-15 10:36:34.488696554 +0000 UTC m=+0.347933265 container init 7219ebba47c3952152b29573029507e32ce572872bd2c6ac3e881c3f64fe3aa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate-test, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:36:34 compute-0 podman[82372]: 2025-12-15 10:36:34.496273563 +0000 UTC m=+0.355510194 container start 7219ebba47c3952152b29573029507e32ce572872bd2c6ac3e881c3f64fe3aa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:36:34 compute-0 podman[82372]: 2025-12-15 10:36:34.51798526 +0000 UTC m=+0.377222001 container attach 7219ebba47c3952152b29573029507e32ce572872bd2c6ac3e881c3f64fe3aa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate-test, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:36:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate-test[82388]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 15 10:36:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate-test[82388]:                             [--no-systemd] [--no-tmpfs]
Dec 15 10:36:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate-test[82388]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 15 10:36:34 compute-0 systemd[1]: libpod-7219ebba47c3952152b29573029507e32ce572872bd2c6ac3e881c3f64fe3aa7.scope: Deactivated successfully.
Dec 15 10:36:34 compute-0 podman[82372]: 2025-12-15 10:36:34.702756275 +0000 UTC m=+0.561992916 container died 7219ebba47c3952152b29573029507e32ce572872bd2c6ac3e881c3f64fe3aa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate-test, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0)
Dec 15 10:36:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a09b5d2563dc17d7c68914fe71812e5871577913d5f060dcf0a7c04691a0a01-merged.mount: Deactivated successfully.
Dec 15 10:36:34 compute-0 podman[82372]: 2025-12-15 10:36:34.765320867 +0000 UTC m=+0.624557498 container remove 7219ebba47c3952152b29573029507e32ce572872bd2c6ac3e881c3f64fe3aa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 15 10:36:34 compute-0 systemd[1]: libpod-conmon-7219ebba47c3952152b29573029507e32ce572872bd2c6ac3e881c3f64fe3aa7.scope: Deactivated successfully.
Dec 15 10:36:34 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:35 compute-0 systemd[1]: Reloading.
Dec 15 10:36:35 compute-0 systemd-rc-local-generator[82450]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:36:35 compute-0 systemd-sysv-generator[82456]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:36:35 compute-0 systemd[1]: Reloading.
Dec 15 10:36:35 compute-0 systemd-rc-local-generator[82493]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:36:35 compute-0 systemd-sysv-generator[82497]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:36:35 compute-0 systemd[1]: Starting Ceph osd.0 for 77365f67-614e-5a8d-b658-640395550c79...
Dec 15 10:36:35 compute-0 podman[82549]: 2025-12-15 10:36:35.770836206 +0000 UTC m=+0.036965597 container create d6db9ccea37077fb38affc77fe3ade7dc3eb20d6e62ec878c2408e81fd2c9044 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:36:35 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57fc66bfae8af49e2dae97d9a78a488b0b94af5779aa07bffd8d3f94c6952f45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57fc66bfae8af49e2dae97d9a78a488b0b94af5779aa07bffd8d3f94c6952f45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57fc66bfae8af49e2dae97d9a78a488b0b94af5779aa07bffd8d3f94c6952f45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57fc66bfae8af49e2dae97d9a78a488b0b94af5779aa07bffd8d3f94c6952f45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57fc66bfae8af49e2dae97d9a78a488b0b94af5779aa07bffd8d3f94c6952f45/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:35 compute-0 podman[82549]: 2025-12-15 10:36:35.842076607 +0000 UTC m=+0.108206268 container init d6db9ccea37077fb38affc77fe3ade7dc3eb20d6e62ec878c2408e81fd2c9044 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 15 10:36:35 compute-0 podman[82549]: 2025-12-15 10:36:35.753340405 +0000 UTC m=+0.019469846 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:36:35 compute-0 podman[82549]: 2025-12-15 10:36:35.851888287 +0000 UTC m=+0.118017688 container start d6db9ccea37077fb38affc77fe3ade7dc3eb20d6e62ec878c2408e81fd2c9044 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:36:35 compute-0 podman[82549]: 2025-12-15 10:36:35.854986392 +0000 UTC m=+0.121115783 container attach d6db9ccea37077fb38affc77fe3ade7dc3eb20d6e62ec878c2408e81fd2c9044 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 15 10:36:35 compute-0 ceph-mon[74356]: pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate[82565]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 15 10:36:36 compute-0 bash[82549]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 15 10:36:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate[82565]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 15 10:36:36 compute-0 bash[82549]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 15 10:36:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:36:36 compute-0 lvm[82646]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 15 10:36:36 compute-0 lvm[82646]: VG ceph_vg0 finished
Dec 15 10:36:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate[82565]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 15 10:36:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate[82565]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 15 10:36:36 compute-0 bash[82549]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 15 10:36:36 compute-0 bash[82549]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 15 10:36:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate[82565]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 15 10:36:36 compute-0 bash[82549]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 15 10:36:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate[82565]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 15 10:36:36 compute-0 bash[82549]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 15 10:36:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate[82565]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 15 10:36:36 compute-0 bash[82549]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 15 10:36:36 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:37 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate[82565]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 15 10:36:37 compute-0 bash[82549]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 15 10:36:37 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate[82565]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 15 10:36:37 compute-0 bash[82549]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 15 10:36:37 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate[82565]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 15 10:36:37 compute-0 bash[82549]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 15 10:36:37 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate[82565]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 15 10:36:37 compute-0 bash[82549]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 15 10:36:37 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate[82565]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 15 10:36:37 compute-0 bash[82549]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 15 10:36:37 compute-0 systemd[1]: libpod-d6db9ccea37077fb38affc77fe3ade7dc3eb20d6e62ec878c2408e81fd2c9044.scope: Deactivated successfully.
Dec 15 10:36:37 compute-0 systemd[1]: libpod-d6db9ccea37077fb38affc77fe3ade7dc3eb20d6e62ec878c2408e81fd2c9044.scope: Consumed 1.260s CPU time.
Dec 15 10:36:37 compute-0 podman[82759]: 2025-12-15 10:36:37.100917248 +0000 UTC m=+0.022544312 container died d6db9ccea37077fb38affc77fe3ade7dc3eb20d6e62ec878c2408e81fd2c9044 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:36:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-57fc66bfae8af49e2dae97d9a78a488b0b94af5779aa07bffd8d3f94c6952f45-merged.mount: Deactivated successfully.
Dec 15 10:36:37 compute-0 podman[82759]: 2025-12-15 10:36:37.226667007 +0000 UTC m=+0.148294071 container remove d6db9ccea37077fb38affc77fe3ade7dc3eb20d6e62ec878c2408e81fd2c9044 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0-activate, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:36:37 compute-0 podman[82818]: 2025-12-15 10:36:37.436942665 +0000 UTC m=+0.024405522 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:36:37 compute-0 podman[82818]: 2025-12-15 10:36:37.537722278 +0000 UTC m=+0.125185115 container create e17a7f3bd182609d918941b5a06013503f76f7f1dc024608d0c649dec3db9a16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 15 10:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/777b635f0ea35c05b10e3ffc7dbd5e24f6d3fa7cb8b316e1a5308a1e317e60b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/777b635f0ea35c05b10e3ffc7dbd5e24f6d3fa7cb8b316e1a5308a1e317e60b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/777b635f0ea35c05b10e3ffc7dbd5e24f6d3fa7cb8b316e1a5308a1e317e60b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/777b635f0ea35c05b10e3ffc7dbd5e24f6d3fa7cb8b316e1a5308a1e317e60b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/777b635f0ea35c05b10e3ffc7dbd5e24f6d3fa7cb8b316e1a5308a1e317e60b8/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:37 compute-0 podman[82818]: 2025-12-15 10:36:37.633340279 +0000 UTC m=+0.220803136 container init e17a7f3bd182609d918941b5a06013503f76f7f1dc024608d0c649dec3db9a16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 15 10:36:37 compute-0 podman[82818]: 2025-12-15 10:36:37.638661976 +0000 UTC m=+0.226124843 container start e17a7f3bd182609d918941b5a06013503f76f7f1dc024608d0c649dec3db9a16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:36:37 compute-0 bash[82818]: e17a7f3bd182609d918941b5a06013503f76f7f1dc024608d0c649dec3db9a16
Dec 15 10:36:37 compute-0 systemd[1]: Started Ceph osd.0 for 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:36:37 compute-0 ceph-osd[82838]: set uid:gid to 167:167 (ceph:ceph)
Dec 15 10:36:37 compute-0 ceph-osd[82838]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Dec 15 10:36:37 compute-0 ceph-osd[82838]: pidfile_write: ignore empty --pid-file
Dec 15 10:36:37 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 15 10:36:37 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 15 10:36:37 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 15 10:36:37 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 15 10:36:37 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) close
Dec 15 10:36:37 compute-0 sudo[82262]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:37 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:36:37 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:37 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:36:37 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:37 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:36:37 compute-0 sudo[82850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:36:37 compute-0 sudo[82850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:36:37 compute-0 sudo[82850]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:37 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:37 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:36:37 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:37 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 15 10:36:37 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 15 10:36:37 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 15 10:36:37 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 15 10:36:37 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) close
Dec 15 10:36:37 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 15 10:36:37 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 15 10:36:37 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 15 10:36:37 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 15 10:36:37 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) close
Dec 15 10:36:37 compute-0 sudo[82875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 -- raw list --format json
Dec 15 10:36:37 compute-0 sudo[82875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:36:37 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 15 10:36:37 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 15 10:36:37 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 15 10:36:37 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 15 10:36:37 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) close
Dec 15 10:36:38 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 15 10:36:38 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 15 10:36:38 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 15 10:36:38 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 15 10:36:38 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) close
Dec 15 10:36:38 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 15 10:36:38 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 15 10:36:38 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 15 10:36:38 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 15 10:36:38 compute-0 ceph-osd[82838]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 15 10:36:38 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9bc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 15 10:36:38 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9bc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 15 10:36:38 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9bc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 15 10:36:38 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9bc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 15 10:36:38 compute-0 ceph-osd[82838]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 15 10:36:38 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9bc00 /var/lib/ceph/osd/ceph-0/block) close
Dec 15 10:36:38 compute-0 podman[82953]: 2025-12-15 10:36:38.363944194 +0000 UTC m=+0.047717554 container create 1c7374fbbc1b51bf00982a47167e7c861438b4b2cb5796c4d86167c3a7c9fca3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_morse, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:36:38 compute-0 systemd[1]: Started libpod-conmon-1c7374fbbc1b51bf00982a47167e7c861438b4b2cb5796c4d86167c3a7c9fca3.scope.
Dec 15 10:36:38 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:36:38 compute-0 podman[82953]: 2025-12-15 10:36:38.344061167 +0000 UTC m=+0.027834547 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:36:38 compute-0 podman[82953]: 2025-12-15 10:36:38.45355342 +0000 UTC m=+0.137326840 container init 1c7374fbbc1b51bf00982a47167e7c861438b4b2cb5796c4d86167c3a7c9fca3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_morse, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:36:38 compute-0 podman[82953]: 2025-12-15 10:36:38.463361901 +0000 UTC m=+0.147135281 container start 1c7374fbbc1b51bf00982a47167e7c861438b4b2cb5796c4d86167c3a7c9fca3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 15 10:36:38 compute-0 podman[82953]: 2025-12-15 10:36:38.467271138 +0000 UTC m=+0.151044508 container attach 1c7374fbbc1b51bf00982a47167e7c861438b4b2cb5796c4d86167c3a7c9fca3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:36:38 compute-0 strange_morse[82970]: 167 167
Dec 15 10:36:38 compute-0 systemd[1]: libpod-1c7374fbbc1b51bf00982a47167e7c861438b4b2cb5796c4d86167c3a7c9fca3.scope: Deactivated successfully.
Dec 15 10:36:38 compute-0 podman[82953]: 2025-12-15 10:36:38.472118461 +0000 UTC m=+0.155891861 container died 1c7374fbbc1b51bf00982a47167e7c861438b4b2cb5796c4d86167c3a7c9fca3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_morse, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 15 10:36:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-f859a091e591d3e33272c57e9db7b1741e28d6a54b4c0f55835236b1f33695ac-merged.mount: Deactivated successfully.
Dec 15 10:36:38 compute-0 podman[82953]: 2025-12-15 10:36:38.515000961 +0000 UTC m=+0.198774321 container remove 1c7374fbbc1b51bf00982a47167e7c861438b4b2cb5796c4d86167c3a7c9fca3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 15 10:36:38 compute-0 systemd[1]: libpod-conmon-1c7374fbbc1b51bf00982a47167e7c861438b4b2cb5796c4d86167c3a7c9fca3.scope: Deactivated successfully.
Dec 15 10:36:38 compute-0 ceph-osd[82838]: bdev(0x55b8a0e9b800 /var/lib/ceph/osd/ceph-0/block) close
Dec 15 10:36:38 compute-0 podman[82995]: 2025-12-15 10:36:38.682740517 +0000 UTC m=+0.045984776 container create 27505b58f65847d0665bb99d36c80d2566b45b6dafb50a10c35395b2388a1824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_golick, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:36:38 compute-0 ceph-mon[74356]: pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:38 compute-0 systemd[1]: Started libpod-conmon-27505b58f65847d0665bb99d36c80d2566b45b6dafb50a10c35395b2388a1824.scope.
Dec 15 10:36:38 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:38 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:38 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:38 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:38 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef68097da3779d5b35d6817c3a98e3d7bbdbd538144d71de9ddf316af3b5c41c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef68097da3779d5b35d6817c3a98e3d7bbdbd538144d71de9ddf316af3b5c41c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef68097da3779d5b35d6817c3a98e3d7bbdbd538144d71de9ddf316af3b5c41c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef68097da3779d5b35d6817c3a98e3d7bbdbd538144d71de9ddf316af3b5c41c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:38 compute-0 podman[82995]: 2025-12-15 10:36:38.664009162 +0000 UTC m=+0.027253431 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:36:38 compute-0 podman[82995]: 2025-12-15 10:36:38.762334088 +0000 UTC m=+0.125578367 container init 27505b58f65847d0665bb99d36c80d2566b45b6dafb50a10c35395b2388a1824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 15 10:36:38 compute-0 podman[82995]: 2025-12-15 10:36:38.769159395 +0000 UTC m=+0.132403654 container start 27505b58f65847d0665bb99d36c80d2566b45b6dafb50a10c35395b2388a1824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 15 10:36:38 compute-0 podman[82995]: 2025-12-15 10:36:38.773396662 +0000 UTC m=+0.136640951 container attach 27505b58f65847d0665bb99d36c80d2566b45b6dafb50a10c35395b2388a1824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_golick, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 15 10:36:38 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:38 compute-0 ceph-osd[82838]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Dec 15 10:36:38 compute-0 ceph-osd[82838]: load: jerasure load: lrc 
Dec 15 10:36:38 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 15 10:36:38 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 15 10:36:38 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 15 10:36:38 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 15 10:36:38 compute-0 ceph-osd[82838]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 15 10:36:38 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 15 10:36:39 compute-0 ceph-osd[82838]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 15 10:36:39 compute-0 ceph-osd[82838]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 15 10:36:39 compute-0 lvm[83104]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 15 10:36:39 compute-0 lvm[83104]: VG ceph_vg0 finished
Dec 15 10:36:39 compute-0 elated_golick[83013]: {}
Dec 15 10:36:39 compute-0 systemd[1]: libpod-27505b58f65847d0665bb99d36c80d2566b45b6dafb50a10c35395b2388a1824.scope: Deactivated successfully.
Dec 15 10:36:39 compute-0 systemd[1]: libpod-27505b58f65847d0665bb99d36c80d2566b45b6dafb50a10c35395b2388a1824.scope: Consumed 1.235s CPU time.
Dec 15 10:36:39 compute-0 podman[82995]: 2025-12-15 10:36:39.554758684 +0000 UTC m=+0.918002943 container died 27505b58f65847d0665bb99d36c80d2566b45b6dafb50a10c35395b2388a1824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_golick, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 15 10:36:39 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d36c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d37000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d37000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d37000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d37000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluefs mount
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluefs mount shared_bdev_used = 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: RocksDB version: 7.9.2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Git sha 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: DB SUMMARY
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: DB Session ID:  YR341F293IL70ABQKXC0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: CURRENT file:  CURRENT
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: IDENTITY file:  IDENTITY
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                         Options.error_if_exists: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.create_if_missing: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                         Options.paranoid_checks: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                                     Options.env: 0x55b8a1d07dc0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                                Options.info_log: 0x55b8a1d0b7a0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.max_file_opening_threads: 16
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                              Options.statistics: (nil)
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                               Options.use_fsync: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.max_log_file_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                         Options.allow_fallocate: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.use_direct_reads: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.create_missing_column_families: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                              Options.db_log_dir: 
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                                 Options.wal_dir: db.wal
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.advise_random_on_open: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.write_buffer_manager: 0x55b8a1e00a00
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                            Options.rate_limiter: (nil)
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.unordered_write: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                               Options.row_cache: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                              Options.wal_filter: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.allow_ingest_behind: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.two_write_queues: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.manual_wal_flush: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.wal_compression: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.atomic_flush: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.log_readahead_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.allow_data_in_errors: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.db_host_id: __hostname__
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.max_background_jobs: 4
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.max_background_compactions: -1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.max_subcompactions: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.max_open_files: -1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.bytes_per_sync: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.max_background_flushes: -1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Compression algorithms supported:
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         kZSTD supported: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         kXpressCompression supported: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         kBZip2Compression supported: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         kLZ4Compression supported: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         kZlibCompression supported: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         kLZ4HCCompression supported: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         kSnappyCompression supported: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.sst_partitioner_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b8a1d0bb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b8a0f31350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.write_buffer_size: 16777216
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.max_write_buffer_number: 64
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.compression: LZ4
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.num_levels: 7
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.arena_block_size: 1048576
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.disable_auto_compactions: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.inplace_update_support: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.bloom_locality: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_successive_merges: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.paranoid_file_checks: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.force_consistency_checks: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.report_bg_io_stats: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                               Options.ttl: 2592000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.enable_blob_files: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.min_blob_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.blob_file_size: 268435456
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.blob_file_starting_level: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:           Options.merge_operator: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.sst_partitioner_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b8a1d0bb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b8a0f31350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.write_buffer_size: 16777216
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.max_write_buffer_number: 64
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.compression: LZ4
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.num_levels: 7
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.arena_block_size: 1048576
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.disable_auto_compactions: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.inplace_update_support: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.bloom_locality: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_successive_merges: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.paranoid_file_checks: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.force_consistency_checks: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.report_bg_io_stats: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                               Options.ttl: 2592000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.enable_blob_files: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.min_blob_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.blob_file_size: 268435456
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.blob_file_starting_level: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:           Options.merge_operator: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.sst_partitioner_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b8a1d0bb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b8a0f31350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.write_buffer_size: 16777216
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.max_write_buffer_number: 64
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.compression: LZ4
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.num_levels: 7
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.arena_block_size: 1048576
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.disable_auto_compactions: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.inplace_update_support: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.bloom_locality: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_successive_merges: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.paranoid_file_checks: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.force_consistency_checks: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.report_bg_io_stats: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                               Options.ttl: 2592000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.enable_blob_files: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.min_blob_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.blob_file_size: 268435456
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.blob_file_starting_level: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:           Options.merge_operator: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.sst_partitioner_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b8a1d0bb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b8a0f31350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.write_buffer_size: 16777216
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.max_write_buffer_number: 64
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.compression: LZ4
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.num_levels: 7
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.arena_block_size: 1048576
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.disable_auto_compactions: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.inplace_update_support: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.bloom_locality: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_successive_merges: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.paranoid_file_checks: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.force_consistency_checks: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.report_bg_io_stats: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                               Options.ttl: 2592000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.enable_blob_files: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.min_blob_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.blob_file_size: 268435456
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.blob_file_starting_level: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:           Options.merge_operator: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.sst_partitioner_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b8a1d0bb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b8a0f31350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.write_buffer_size: 16777216
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.max_write_buffer_number: 64
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.compression: LZ4
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.num_levels: 7
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.arena_block_size: 1048576
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.disable_auto_compactions: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.inplace_update_support: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.bloom_locality: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_successive_merges: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.paranoid_file_checks: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.force_consistency_checks: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.report_bg_io_stats: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                               Options.ttl: 2592000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.enable_blob_files: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.min_blob_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.blob_file_size: 268435456
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.blob_file_starting_level: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:           Options.merge_operator: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.sst_partitioner_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b8a1d0bb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b8a0f31350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.write_buffer_size: 16777216
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.max_write_buffer_number: 64
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.compression: LZ4
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.num_levels: 7
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.arena_block_size: 1048576
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.disable_auto_compactions: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.inplace_update_support: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.bloom_locality: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_successive_merges: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.paranoid_file_checks: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.force_consistency_checks: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.report_bg_io_stats: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                               Options.ttl: 2592000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.enable_blob_files: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.min_blob_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.blob_file_size: 268435456
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.blob_file_starting_level: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:           Options.merge_operator: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.sst_partitioner_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b8a1d0bb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b8a0f31350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.write_buffer_size: 16777216
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.max_write_buffer_number: 64
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.compression: LZ4
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.num_levels: 7
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.arena_block_size: 1048576
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.disable_auto_compactions: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.inplace_update_support: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.bloom_locality: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_successive_merges: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.paranoid_file_checks: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.force_consistency_checks: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.report_bg_io_stats: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                               Options.ttl: 2592000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.enable_blob_files: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.min_blob_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.blob_file_size: 268435456
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.blob_file_starting_level: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:           Options.merge_operator: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.sst_partitioner_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b8a1d0bb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b8a0f309b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.write_buffer_size: 16777216
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.max_write_buffer_number: 64
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.compression: LZ4
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.num_levels: 7
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.arena_block_size: 1048576
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.disable_auto_compactions: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.inplace_update_support: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.bloom_locality: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_successive_merges: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.paranoid_file_checks: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.force_consistency_checks: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.report_bg_io_stats: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                               Options.ttl: 2592000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.enable_blob_files: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.min_blob_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.blob_file_size: 268435456
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.blob_file_starting_level: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:           Options.merge_operator: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.sst_partitioner_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b8a1d0bb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b8a0f309b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.write_buffer_size: 16777216
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.max_write_buffer_number: 64
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.compression: LZ4
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.num_levels: 7
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.arena_block_size: 1048576
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.disable_auto_compactions: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.inplace_update_support: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.bloom_locality: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_successive_merges: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.paranoid_file_checks: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.force_consistency_checks: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.report_bg_io_stats: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                               Options.ttl: 2592000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.enable_blob_files: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.min_blob_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.blob_file_size: 268435456
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.blob_file_starting_level: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:           Options.merge_operator: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.sst_partitioner_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b8a1d0bb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b8a0f309b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.write_buffer_size: 16777216
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.max_write_buffer_number: 64
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.compression: LZ4
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.num_levels: 7
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.arena_block_size: 1048576
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.disable_auto_compactions: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.inplace_update_support: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.bloom_locality: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_successive_merges: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.paranoid_file_checks: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.force_consistency_checks: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.report_bg_io_stats: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                               Options.ttl: 2592000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.enable_blob_files: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.min_blob_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.blob_file_size: 268435456
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.blob_file_starting_level: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c8ae229c-d677-423e-972d-1877a254a271
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765794999728381, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765794999728588, "job": 1, "event": "recovery_finished"}
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: freelist init
Dec 15 10:36:39 compute-0 ceph-osd[82838]: freelist _read_cfg
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluefs umount
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d37000 /var/lib/ceph/osd/ceph-0/block) close
Dec 15 10:36:39 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:39 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:36:39 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef68097da3779d5b35d6817c3a98e3d7bbdbd538144d71de9ddf316af3b5c41c-merged.mount: Deactivated successfully.
Dec 15 10:36:39 compute-0 podman[82995]: 2025-12-15 10:36:39.810151913 +0000 UTC m=+1.173396212 container remove 27505b58f65847d0665bb99d36c80d2566b45b6dafb50a10c35395b2388a1824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_golick, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:36:39 compute-0 systemd[1]: libpod-conmon-27505b58f65847d0665bb99d36c80d2566b45b6dafb50a10c35395b2388a1824.scope: Deactivated successfully.
Dec 15 10:36:39 compute-0 sudo[82875]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:39 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:36:39 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:39 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d37000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d37000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d37000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bdev(0x55b8a1d37000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluefs mount
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluefs mount shared_bdev_used = 4718592
Dec 15 10:36:39 compute-0 ceph-osd[82838]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: RocksDB version: 7.9.2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Git sha 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: DB SUMMARY
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: DB Session ID:  YR341F293IL70ABQKXC1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: CURRENT file:  CURRENT
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: IDENTITY file:  IDENTITY
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                         Options.error_if_exists: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.create_if_missing: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                         Options.paranoid_checks: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                                     Options.env: 0x55b8a1ea4310
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                                Options.info_log: 0x55b8a1d0b940
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.max_file_opening_threads: 16
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                              Options.statistics: (nil)
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                               Options.use_fsync: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.max_log_file_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                         Options.allow_fallocate: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.use_direct_reads: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.create_missing_column_families: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                              Options.db_log_dir: 
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                                 Options.wal_dir: db.wal
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.advise_random_on_open: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.write_buffer_manager: 0x55b8a1e00a00
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                            Options.rate_limiter: (nil)
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.unordered_write: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                               Options.row_cache: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                              Options.wal_filter: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.allow_ingest_behind: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.two_write_queues: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.manual_wal_flush: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.wal_compression: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.atomic_flush: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.log_readahead_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.allow_data_in_errors: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.db_host_id: __hostname__
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.max_background_jobs: 4
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.max_background_compactions: -1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.max_subcompactions: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.max_open_files: -1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.bytes_per_sync: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.max_background_flushes: -1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Compression algorithms supported:
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         kZSTD supported: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         kXpressCompression supported: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         kBZip2Compression supported: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         kLZ4Compression supported: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         kZlibCompression supported: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         kLZ4HCCompression supported: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         kSnappyCompression supported: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.sst_partitioner_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b8a1d0b680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b8a0f31350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.write_buffer_size: 16777216
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.max_write_buffer_number: 64
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.compression: LZ4
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.num_levels: 7
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.arena_block_size: 1048576
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.disable_auto_compactions: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.inplace_update_support: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.bloom_locality: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_successive_merges: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.paranoid_file_checks: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.force_consistency_checks: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.report_bg_io_stats: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                               Options.ttl: 2592000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.enable_blob_files: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.min_blob_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.blob_file_size: 268435456
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.blob_file_starting_level: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:           Options.merge_operator: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.sst_partitioner_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b8a1d0b680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b8a0f31350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.write_buffer_size: 16777216
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.max_write_buffer_number: 64
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.compression: LZ4
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.num_levels: 7
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.arena_block_size: 1048576
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.disable_auto_compactions: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.inplace_update_support: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.bloom_locality: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_successive_merges: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.paranoid_file_checks: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.force_consistency_checks: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.report_bg_io_stats: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                               Options.ttl: 2592000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.enable_blob_files: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.min_blob_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.blob_file_size: 268435456
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.blob_file_starting_level: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:           Options.merge_operator: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.sst_partitioner_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b8a1d0b680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b8a0f31350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.write_buffer_size: 16777216
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.max_write_buffer_number: 64
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.compression: LZ4
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.num_levels: 7
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.arena_block_size: 1048576
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.disable_auto_compactions: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.inplace_update_support: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.bloom_locality: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_successive_merges: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.paranoid_file_checks: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.force_consistency_checks: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.report_bg_io_stats: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                               Options.ttl: 2592000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                       Options.enable_blob_files: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                           Options.min_blob_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                          Options.blob_file_size: 268435456
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                Options.blob_file_starting_level: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:           Options.merge_operator: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.sst_partitioner_factory: None
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b8a1d0b680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b8a0f31350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.write_buffer_size: 16777216
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:  Options.max_write_buffer_number: 64
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:          Options.compression: LZ4
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:       Options.prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:             Options.num_levels: 7
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.level: 32767
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:               Options.compression_opts.strategy: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 15 10:36:39 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.enabled: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                        Options.arena_block_size: 1048576
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.disable_auto_compactions: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.inplace_update_support: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                           Options.bloom_locality: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_successive_merges: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.paranoid_file_checks: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.force_consistency_checks: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.report_bg_io_stats: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                               Options.ttl: 2592000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                       Options.enable_blob_files: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                           Options.min_blob_size: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                          Options.blob_file_size: 268435456
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.blob_file_starting_level: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:           Options.merge_operator: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter_factory: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:  Options.sst_partitioner_factory: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b8a1d0b680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b8a0f31350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.write_buffer_size: 16777216
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:  Options.max_write_buffer_number: 64
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.compression: LZ4
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:       Options.prefix_extractor: nullptr
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:             Options.num_levels: 7
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.level: 32767
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.compression_opts.strategy: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.enabled: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                        Options.arena_block_size: 1048576
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.disable_auto_compactions: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.inplace_update_support: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                           Options.bloom_locality: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_successive_merges: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.paranoid_file_checks: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.force_consistency_checks: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.report_bg_io_stats: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                               Options.ttl: 2592000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                       Options.enable_blob_files: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                           Options.min_blob_size: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                          Options.blob_file_size: 268435456
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.blob_file_starting_level: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:           Options.merge_operator: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter_factory: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:  Options.sst_partitioner_factory: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b8a1d0b680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b8a0f31350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.write_buffer_size: 16777216
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:  Options.max_write_buffer_number: 64
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.compression: LZ4
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:       Options.prefix_extractor: nullptr
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:             Options.num_levels: 7
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.level: 32767
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.compression_opts.strategy: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.enabled: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                        Options.arena_block_size: 1048576
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.disable_auto_compactions: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.inplace_update_support: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                           Options.bloom_locality: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_successive_merges: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.paranoid_file_checks: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.force_consistency_checks: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.report_bg_io_stats: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                               Options.ttl: 2592000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                       Options.enable_blob_files: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                           Options.min_blob_size: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                          Options.blob_file_size: 268435456
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.blob_file_starting_level: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:           Options.merge_operator: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter_factory: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:  Options.sst_partitioner_factory: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b8a1d0b680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b8a0f31350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.write_buffer_size: 16777216
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:  Options.max_write_buffer_number: 64
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.compression: LZ4
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:       Options.prefix_extractor: nullptr
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:             Options.num_levels: 7
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.level: 32767
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.compression_opts.strategy: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.enabled: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                        Options.arena_block_size: 1048576
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.disable_auto_compactions: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.inplace_update_support: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                           Options.bloom_locality: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_successive_merges: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 15 10:36:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.paranoid_file_checks: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.force_consistency_checks: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.report_bg_io_stats: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                               Options.ttl: 2592000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                       Options.enable_blob_files: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                           Options.min_blob_size: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                          Options.blob_file_size: 268435456
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.blob_file_starting_level: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:           Options.merge_operator: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter_factory: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:  Options.sst_partitioner_factory: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b8a1d0bac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b8a0f309b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.write_buffer_size: 16777216
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:  Options.max_write_buffer_number: 64
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.compression: LZ4
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:       Options.prefix_extractor: nullptr
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:             Options.num_levels: 7
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.level: 32767
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.compression_opts.strategy: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.enabled: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                        Options.arena_block_size: 1048576
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.disable_auto_compactions: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.inplace_update_support: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                           Options.bloom_locality: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_successive_merges: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.paranoid_file_checks: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.force_consistency_checks: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.report_bg_io_stats: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                               Options.ttl: 2592000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                       Options.enable_blob_files: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                           Options.min_blob_size: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                          Options.blob_file_size: 268435456
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.blob_file_starting_level: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:           Options.merge_operator: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter_factory: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:  Options.sst_partitioner_factory: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b8a1d0bac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b8a0f309b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.write_buffer_size: 16777216
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:  Options.max_write_buffer_number: 64
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.compression: LZ4
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:       Options.prefix_extractor: nullptr
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:             Options.num_levels: 7
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.level: 32767
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.compression_opts.strategy: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.enabled: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                        Options.arena_block_size: 1048576
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.disable_auto_compactions: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.inplace_update_support: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                           Options.bloom_locality: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_successive_merges: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.paranoid_file_checks: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.force_consistency_checks: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.report_bg_io_stats: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                               Options.ttl: 2592000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                       Options.enable_blob_files: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                           Options.min_blob_size: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                          Options.blob_file_size: 268435456
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.blob_file_starting_level: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:           Options.merge_operator: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.compaction_filter_factory: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:  Options.sst_partitioner_factory: None
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b8a1d0bac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b8a0f309b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.write_buffer_size: 16777216
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:  Options.max_write_buffer_number: 64
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.compression: LZ4
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:       Options.prefix_extractor: nullptr
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:             Options.num_levels: 7
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.level: 32767
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.compression_opts.strategy: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                  Options.compression_opts.enabled: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                        Options.arena_block_size: 1048576
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.disable_auto_compactions: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.inplace_update_support: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                           Options.bloom_locality: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                    Options.max_successive_merges: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.paranoid_file_checks: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.force_consistency_checks: 1
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.report_bg_io_stats: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                               Options.ttl: 2592000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                       Options.enable_blob_files: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                           Options.min_blob_size: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                          Options.blob_file_size: 268435456
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb:                Options.blob_file_starting_level: 0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c8ae229c-d677-423e-972d-1877a254a271
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795000000986, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 15 10:36:40 compute-0 sudo[83493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 15 10:36:40 compute-0 sudo[83493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:36:40 compute-0 sudo[83493]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795000186875, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765794999, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c8ae229c-d677-423e-972d-1877a254a271", "db_session_id": "YR341F293IL70ABQKXC1", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795000190206, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765795000, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c8ae229c-d677-423e-972d-1877a254a271", "db_session_id": "YR341F293IL70ABQKXC1", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795000239615, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765795000, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c8ae229c-d677-423e-972d-1877a254a271", "db_session_id": "YR341F293IL70ABQKXC1", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795000278038, "job": 1, "event": "recovery_finished"}
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 15 10:36:40 compute-0 sudo[83518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:36:40 compute-0 sudo[83518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:36:40 compute-0 sudo[83518]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b8a1f08000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: DB pointer 0x55b8a1eb2000
Dec 15 10:36:40 compute-0 ceph-osd[82838]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 15 10:36:40 compute-0 ceph-osd[82838]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Dec 15 10:36:40 compute-0 ceph-osd[82838]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 15 10:36:40 compute-0 ceph-osd[82838]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.19              0.00         1    0.186       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.19              0.00         1    0.186       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.19              0.00         1    0.186       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.19              0.00         1    0.186       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b8a0f31350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b8a0f31350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b8a0f31350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b8a0f31350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b8a0f31350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b8a0f31350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b8a0f31350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b8a0f309b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b8a0f309b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.049       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b8a0f309b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.038       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.038       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.038       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.038       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b8a0f31350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b8a0f31350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 15 10:36:40 compute-0 ceph-osd[82838]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 15 10:36:40 compute-0 ceph-osd[82838]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 15 10:36:40 compute-0 ceph-osd[82838]: _get_class not permitted to load lua
Dec 15 10:36:40 compute-0 ceph-osd[82838]: _get_class not permitted to load sdk
Dec 15 10:36:40 compute-0 ceph-osd[82838]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 15 10:36:40 compute-0 ceph-osd[82838]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 15 10:36:40 compute-0 ceph-osd[82838]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 15 10:36:40 compute-0 ceph-osd[82838]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 15 10:36:40 compute-0 ceph-osd[82838]: osd.0 0 load_pgs
Dec 15 10:36:40 compute-0 ceph-osd[82838]: osd.0 0 load_pgs opened 0 pgs
Dec 15 10:36:40 compute-0 ceph-osd[82838]: osd.0 0 log_to_monitors true
Dec 15 10:36:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0[82834]: 2025-12-15T10:36:40.334+0000 7fa899a2e740 -1 osd.0 0 log_to_monitors true
Dec 15 10:36:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Dec 15 10:36:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1140776426,v1:192.168.122.100:6803/1140776426]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec 15 10:36:40 compute-0 sudo[83543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 15 10:36:40 compute-0 sudo[83543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:36:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:36:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:40 compute-0 ceph-mon[74356]: pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:40 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:40 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:40 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:40 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:40 compute-0 ceph-mon[74356]: from='osd.0 [v2:192.168.122.100:6802/1140776426,v1:192.168.122.100:6803/1140776426]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec 15 10:36:40 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:40 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:40 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:36:40 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:36:40 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:36:40 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:36:40 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:36:40 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:36:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Dec 15 10:36:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/4075807867,v1:192.168.122.101:6801/4075807867]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec 15 10:36:40 compute-0 podman[83670]: 2025-12-15 10:36:40.918774729 +0000 UTC m=+0.076404073 container exec 79ed8dc51b1852fd831764c2817cfa3745ae4937bc9015bdb965e376ab3cc58d (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec 15 10:36:41 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec 15 10:36:41 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 15 10:36:41 compute-0 podman[83670]: 2025-12-15 10:36:41.040578371 +0000 UTC m=+0.198207755 container exec_died 79ed8dc51b1852fd831764c2817cfa3745ae4937bc9015bdb965e376ab3cc58d (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 15 10:36:41 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1140776426,v1:192.168.122.100:6803/1140776426]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 15 10:36:41 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/4075807867,v1:192.168.122.101:6801/4075807867]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 15 10:36:41 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Dec 15 10:36:41 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Dec 15 10:36:41 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 15 10:36:41 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1140776426,v1:192.168.122.100:6803/1140776426]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 15 10:36:41 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Dec 15 10:36:41 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Dec 15 10:36:41 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/4075807867,v1:192.168.122.101:6801/4075807867]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec 15 10:36:41 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-1,root=default}
Dec 15 10:36:41 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:36:41 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:36:41 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:36:41 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 15 10:36:41 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:41 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 15 10:36:41 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:36:41 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 15 10:36:41 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 15 10:36:41 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:41 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 15 10:36:41 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 15 10:36:41 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:41 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:36:41 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:41 compute-0 sudo[83543]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:41 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:36:41 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:41 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:36:41 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:41 compute-0 sudo[83755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:36:41 compute-0 sudo[83755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:36:41 compute-0 sudo[83755]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:41 compute-0 sudo[83780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 15 10:36:41 compute-0 sudo[83780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:36:42 compute-0 ceph-mon[74356]: pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:42 compute-0 ceph-mon[74356]: from='osd.1 [v2:192.168.122.101:6800/4075807867,v1:192.168.122.101:6801/4075807867]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec 15 10:36:42 compute-0 ceph-mon[74356]: from='osd.0 [v2:192.168.122.100:6802/1140776426,v1:192.168.122.100:6803/1140776426]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 15 10:36:42 compute-0 ceph-mon[74356]: from='osd.1 [v2:192.168.122.101:6800/4075807867,v1:192.168.122.101:6801/4075807867]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 15 10:36:42 compute-0 ceph-mon[74356]: osdmap e6: 2 total, 0 up, 2 in
Dec 15 10:36:42 compute-0 ceph-mon[74356]: from='osd.0 [v2:192.168.122.100:6802/1140776426,v1:192.168.122.100:6803/1140776426]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 15 10:36:42 compute-0 ceph-mon[74356]: from='osd.1 [v2:192.168.122.101:6800/4075807867,v1:192.168.122.101:6801/4075807867]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec 15 10:36:42 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:42 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:36:42 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:42 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:42 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:42 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:42 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec 15 10:36:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 15 10:36:42 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1140776426,v1:192.168.122.100:6803/1140776426]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 15 10:36:42 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/4075807867,v1:192.168.122.101:6801/4075807867]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Dec 15 10:36:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Dec 15 10:36:42 compute-0 ceph-osd[82838]: osd.0 0 done with init, starting boot process
Dec 15 10:36:42 compute-0 ceph-osd[82838]: osd.0 0 start_boot
Dec 15 10:36:42 compute-0 ceph-osd[82838]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 15 10:36:42 compute-0 ceph-osd[82838]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 15 10:36:42 compute-0 ceph-osd[82838]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 15 10:36:42 compute-0 ceph-osd[82838]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 15 10:36:42 compute-0 ceph-osd[82838]: osd.0 0  bench count 12288000 bsize 4 KiB
Dec 15 10:36:42 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Dec 15 10:36:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 15 10:36:42 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 15 10:36:42 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:36:42 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 15 10:36:42 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 15 10:36:42 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1140776426; not ready for session (expect reconnect)
Dec 15 10:36:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 15 10:36:42 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:42 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 15 10:36:42 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/4075807867; not ready for session (expect reconnect)
Dec 15 10:36:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 15 10:36:42 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:36:42 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 15 10:36:42 compute-0 sudo[83780]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:36:42 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:42 compute-0 sudo[83836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:36:42 compute-0 sudo[83836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:36:42 compute-0 sudo[83836]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:42 compute-0 sudo[83861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 -- inventory --format=json-pretty --filter-for-batch
Dec 15 10:36:42 compute-0 sudo[83861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:36:42 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:42 compute-0 podman[83923]: 2025-12-15 10:36:42.863287578 +0000 UTC m=+0.087269152 container create d9c24d202b81eb34cf1941b497a963d5c582aacf4e1cd0177953d7192b2451fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_merkle, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:36:42 compute-0 podman[83923]: 2025-12-15 10:36:42.803579355 +0000 UTC m=+0.027560959 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:36:42 compute-0 systemd[1]: Started libpod-conmon-d9c24d202b81eb34cf1941b497a963d5c582aacf4e1cd0177953d7192b2451fe.scope.
Dec 15 10:36:42 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:36:43 compute-0 podman[83923]: 2025-12-15 10:36:43.041610995 +0000 UTC m=+0.265592599 container init d9c24d202b81eb34cf1941b497a963d5c582aacf4e1cd0177953d7192b2451fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_merkle, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:36:43 compute-0 podman[83923]: 2025-12-15 10:36:43.050481549 +0000 UTC m=+0.274463123 container start d9c24d202b81eb34cf1941b497a963d5c582aacf4e1cd0177953d7192b2451fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_merkle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:36:43 compute-0 amazing_merkle[83939]: 167 167
Dec 15 10:36:43 compute-0 systemd[1]: libpod-d9c24d202b81eb34cf1941b497a963d5c582aacf4e1cd0177953d7192b2451fe.scope: Deactivated successfully.
Dec 15 10:36:43 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1140776426; not ready for session (expect reconnect)
Dec 15 10:36:43 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/4075807867; not ready for session (expect reconnect)
Dec 15 10:36:43 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 15 10:36:43 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:43 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 15 10:36:43 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 15 10:36:43 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 15 10:36:43 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:36:43 compute-0 podman[83923]: 2025-12-15 10:36:43.098870531 +0000 UTC m=+0.322852145 container attach d9c24d202b81eb34cf1941b497a963d5c582aacf4e1cd0177953d7192b2451fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_merkle, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 15 10:36:43 compute-0 ceph-mon[74356]: from='osd.0 [v2:192.168.122.100:6802/1140776426,v1:192.168.122.100:6803/1140776426]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 15 10:36:43 compute-0 ceph-mon[74356]: from='osd.1 [v2:192.168.122.101:6800/4075807867,v1:192.168.122.101:6801/4075807867]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Dec 15 10:36:43 compute-0 ceph-mon[74356]: osdmap e7: 2 total, 0 up, 2 in
Dec 15 10:36:43 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:43 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:36:43 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:43 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:36:43 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:43 compute-0 podman[83923]: 2025-12-15 10:36:43.101506524 +0000 UTC m=+0.325488128 container died d9c24d202b81eb34cf1941b497a963d5c582aacf4e1cd0177953d7192b2451fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_merkle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:36:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-1027bb8399c363e2f2235f90dee26b79888ce22a7dd70ad296744369fd877a39-merged.mount: Deactivated successfully.
Dec 15 10:36:43 compute-0 podman[83923]: 2025-12-15 10:36:43.282086193 +0000 UTC m=+0.506067807 container remove d9c24d202b81eb34cf1941b497a963d5c582aacf4e1cd0177953d7192b2451fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_merkle, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:36:43 compute-0 systemd[1]: libpod-conmon-d9c24d202b81eb34cf1941b497a963d5c582aacf4e1cd0177953d7192b2451fe.scope: Deactivated successfully.
Dec 15 10:36:43 compute-0 podman[83964]: 2025-12-15 10:36:43.443628839 +0000 UTC m=+0.052651811 container create 6df6498b061e5dedf0f39fbc5539f8db045c712523a92678d66eb398d191f5f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_fermi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:36:43 compute-0 systemd[1]: Started libpod-conmon-6df6498b061e5dedf0f39fbc5539f8db045c712523a92678d66eb398d191f5f9.scope.
Dec 15 10:36:43 compute-0 podman[83964]: 2025-12-15 10:36:43.420715538 +0000 UTC m=+0.029738500 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:36:43 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:36:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c14d8a9f8477d43dddd300b7ce72da25c7dc151df3042bb0b52ba66f7af405e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c14d8a9f8477d43dddd300b7ce72da25c7dc151df3042bb0b52ba66f7af405e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c14d8a9f8477d43dddd300b7ce72da25c7dc151df3042bb0b52ba66f7af405e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c14d8a9f8477d43dddd300b7ce72da25c7dc151df3042bb0b52ba66f7af405e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:43 compute-0 podman[83964]: 2025-12-15 10:36:43.54252822 +0000 UTC m=+0.151551212 container init 6df6498b061e5dedf0f39fbc5539f8db045c712523a92678d66eb398d191f5f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_fermi, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 15 10:36:43 compute-0 podman[83964]: 2025-12-15 10:36:43.550418307 +0000 UTC m=+0.159441249 container start 6df6498b061e5dedf0f39fbc5539f8db045c712523a92678d66eb398d191f5f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_fermi, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:36:43 compute-0 podman[83964]: 2025-12-15 10:36:43.5661451 +0000 UTC m=+0.175168052 container attach 6df6498b061e5dedf0f39fbc5539f8db045c712523a92678d66eb398d191f5f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_fermi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec 15 10:36:43 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:36:43 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:43 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:36:43 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:43 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:36:43 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:43 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:36:43 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:43 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 15 10:36:43 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 15 10:36:43 compute-0 ceph-mgr[74651]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Dec 15 10:36:43 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Dec 15 10:36:43 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 15 10:36:43 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:44 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1140776426; not ready for session (expect reconnect)
Dec 15 10:36:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 15 10:36:44 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:44 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 15 10:36:44 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/4075807867; not ready for session (expect reconnect)
Dec 15 10:36:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 15 10:36:44 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:36:44 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 15 10:36:44 compute-0 ceph-mon[74356]: purged_snaps scrub starts
Dec 15 10:36:44 compute-0 ceph-mon[74356]: purged_snaps scrub ok
Dec 15 10:36:44 compute-0 ceph-mon[74356]: purged_snaps scrub starts
Dec 15 10:36:44 compute-0 ceph-mon[74356]: purged_snaps scrub ok
Dec 15 10:36:44 compute-0 ceph-mon[74356]: pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:44 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:44 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:36:44 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:44 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:44 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:44 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:44 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 15 10:36:44 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:44 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:44 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:36:44 compute-0 boring_fermi[83981]: [
Dec 15 10:36:44 compute-0 boring_fermi[83981]:     {
Dec 15 10:36:44 compute-0 boring_fermi[83981]:         "available": false,
Dec 15 10:36:44 compute-0 boring_fermi[83981]:         "being_replaced": false,
Dec 15 10:36:44 compute-0 boring_fermi[83981]:         "ceph_device_lvm": false,
Dec 15 10:36:44 compute-0 boring_fermi[83981]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 15 10:36:44 compute-0 boring_fermi[83981]:         "lsm_data": {},
Dec 15 10:36:44 compute-0 boring_fermi[83981]:         "lvs": [],
Dec 15 10:36:44 compute-0 boring_fermi[83981]:         "path": "/dev/sr0",
Dec 15 10:36:44 compute-0 boring_fermi[83981]:         "rejected_reasons": [
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "Has a FileSystem",
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "Insufficient space (<5GB)"
Dec 15 10:36:44 compute-0 boring_fermi[83981]:         ],
Dec 15 10:36:44 compute-0 boring_fermi[83981]:         "sys_api": {
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "actuators": null,
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "device_nodes": [
Dec 15 10:36:44 compute-0 boring_fermi[83981]:                 "sr0"
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             ],
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "devname": "sr0",
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "human_readable_size": "482.00 KB",
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "id_bus": "ata",
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "model": "QEMU DVD-ROM",
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "nr_requests": "2",
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "parent": "/dev/sr0",
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "partitions": {},
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "path": "/dev/sr0",
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "removable": "1",
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "rev": "2.5+",
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "ro": "0",
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "rotational": "1",
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "sas_address": "",
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "sas_device_handle": "",
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "scheduler_mode": "mq-deadline",
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "sectors": 0,
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "sectorsize": "2048",
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "size": 493568.0,
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "support_discard": "2048",
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "type": "disk",
Dec 15 10:36:44 compute-0 boring_fermi[83981]:             "vendor": "QEMU"
Dec 15 10:36:44 compute-0 boring_fermi[83981]:         }
Dec 15 10:36:44 compute-0 boring_fermi[83981]:     }
Dec 15 10:36:44 compute-0 boring_fermi[83981]: ]
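(The JSON block above is the ceph-volume inventory report produced inside the boring_fermi container: the only device seen, /dev/sr0, is rejected as an OSD candidate because it "Has a FileSystem" and has "Insufficient space (<5GB)". As an illustrative sketch only, the same report could be reproduced by hand on the host using the cephadm invocation that appears in the sudo line at 10:36:42, assuming cephadm is on the PATH:

  # re-run the inventory that the cephadm mgr module triggered above (illustrative)
  sudo cephadm ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 -- inventory --format=json-pretty --filter-for-batch
)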
Dec 15 10:36:44 compute-0 systemd[1]: libpod-6df6498b061e5dedf0f39fbc5539f8db045c712523a92678d66eb398d191f5f9.scope: Deactivated successfully.
Dec 15 10:36:44 compute-0 podman[83964]: 2025-12-15 10:36:44.349670201 +0000 UTC m=+0.958693143 container died 6df6498b061e5dedf0f39fbc5539f8db045c712523a92678d66eb398d191f5f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_fermi, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:36:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-c14d8a9f8477d43dddd300b7ce72da25c7dc151df3042bb0b52ba66f7af405e5-merged.mount: Deactivated successfully.
Dec 15 10:36:44 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v47: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:44 compute-0 podman[83964]: 2025-12-15 10:36:44.937942779 +0000 UTC m=+1.546965721 container remove 6df6498b061e5dedf0f39fbc5539f8db045c712523a92678d66eb398d191f5f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 15 10:36:44 compute-0 sudo[83861]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:36:45 compute-0 systemd[1]: libpod-conmon-6df6498b061e5dedf0f39fbc5539f8db045c712523a92678d66eb398d191f5f9.scope: Deactivated successfully.
Dec 15 10:36:45 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1140776426; not ready for session (expect reconnect)
Dec 15 10:36:45 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/4075807867; not ready for session (expect reconnect)
Dec 15 10:36:45 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 15 10:36:45 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:45 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 15 10:36:45 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 15 10:36:45 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:36:45 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 15 10:36:45 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:45 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:36:45 compute-0 ceph-mon[74356]: Adjusting osd_memory_target on compute-1 to  5247M
Dec 15 10:36:45 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:45 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:36:45 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:45 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:36:45 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:45 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 15 10:36:45 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 15 10:36:45 compute-0 ceph-mgr[74651]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Dec 15 10:36:45 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Dec 15 10:36:45 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 15 10:36:45 compute-0 ceph-mgr[74651]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Dec 15 10:36:45 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
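(Here the cephadm memory autotuner computes 134203392 bytes (127.9 MiB) for compute-0, which is below the hard minimum of 939524096 bytes (896 MiB), so the set is rejected and the previously effective osd_memory_target stays in place. A minimal sketch of how one might inspect the value or opt the OSDs out of autotuning; these commands are illustrative and not taken from this log, assuming standard `ceph config` syntax:

  # check the currently effective memory target for osd.0
  ceph config get osd.0 osd_memory_target
  # stop the cephadm module from autotuning the target on these hosts
  ceph config set osd osd_memory_target_autotune false
)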
Dec 15 10:36:46 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1140776426; not ready for session (expect reconnect)
Dec 15 10:36:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 15 10:36:46 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:46 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 15 10:36:46 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/4075807867; not ready for session (expect reconnect)
Dec 15 10:36:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 15 10:36:46 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:36:46 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 15 10:36:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:36:46 compute-0 ceph-mon[74356]: pgmap v47: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:46 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:46 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:36:46 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:46 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:46 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:46 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:36:46 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 15 10:36:46 compute-0 ceph-mon[74356]: Adjusting osd_memory_target on compute-0 to 127.9M
Dec 15 10:36:46 compute-0 ceph-mon[74356]: Unable to set osd_memory_target on compute-0 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Dec 15 10:36:46 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:46 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:36:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec 15 10:36:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 15 10:36:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e8 e8: 2 total, 1 up, 2 in
Dec 15 10:36:46 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.101:6800/4075807867,v1:192.168.122.101:6801/4075807867] boot
Dec 15 10:36:46 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 1 up, 2 in
Dec 15 10:36:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 15 10:36:46 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 15 10:36:46 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:36:46 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 15 10:36:46 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:46 compute-0 ceph-mgr[74651]: [devicehealth INFO root] creating mgr pool
Dec 15 10:36:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Dec 15 10:36:46 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec 15 10:36:47 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1140776426; not ready for session (expect reconnect)
Dec 15 10:36:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 15 10:36:47 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:47 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 15 10:36:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec 15 10:36:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 15 10:36:47 compute-0 ceph-mon[74356]: OSD bench result of 7075.495828 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 15 10:36:47 compute-0 ceph-mon[74356]: osd.1 [v2:192.168.122.101:6800/4075807867,v1:192.168.122.101:6801/4075807867] boot
Dec 15 10:36:47 compute-0 ceph-mon[74356]: osdmap e8: 2 total, 1 up, 2 in
Dec 15 10:36:47 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:47 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:36:47 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec 15 10:36:47 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:47 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 15 10:36:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e9 e9: 2 total, 1 up, 2 in
Dec 15 10:36:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e9 crush map has features 3314933000852226048, adjusting msgr requires
Dec 15 10:36:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Dec 15 10:36:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Dec 15 10:36:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Dec 15 10:36:47 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 1 up, 2 in
Dec 15 10:36:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 15 10:36:47 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Dec 15 10:36:47 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 15 10:36:47 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec 15 10:36:47 compute-0 ceph-osd[82838]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 11.291 iops: 2890.376 elapsed_sec: 1.038
Dec 15 10:36:47 compute-0 ceph-osd[82838]: log_channel(cluster) log [WRN] : OSD bench result of 2890.376374 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
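(Both OSDs' startup benchmarks, 7075 IOPS for osd.1 and 2890 IOPS for osd.0, fall outside the mClock sanity window of 50-500 IOPS, so the default capacity of 315 IOPS is kept for each. The warning itself recommends measuring capacity with an external tool and overriding osd_mclock_max_capacity_iops_[hdd|ssd]. A hedged sketch, assuming a separately measured capacity of 300 IOPS for these hdd-class OSDs (the value and the fio target file are illustrative):

  # probe raw 4k random-write IOPS with fio, then pin the measured capacity per OSD
  fio --name=iops-probe --filename=/tmp/iops-probe.img --size=1G --rw=randwrite --bs=4k --iodepth=32 --direct=1 --runtime=30 --time_based --ioengine=libaio
  ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 300
  ceph config set osd.1 osd_mclock_max_capacity_iops_hdd 300
)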
Dec 15 10:36:47 compute-0 ceph-osd[82838]: osd.0 0 waiting for initial osdmap
Dec 15 10:36:47 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0[82834]: 2025-12-15T10:36:47.760+0000 7fa8959b1640 -1 osd.0 0 waiting for initial osdmap
Dec 15 10:36:47 compute-0 ceph-osd[82838]: osd.0 9 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 15 10:36:47 compute-0 ceph-osd[82838]: osd.0 9 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 15 10:36:47 compute-0 ceph-osd[82838]: osd.0 9 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 15 10:36:47 compute-0 ceph-osd[82838]: osd.0 9 check_osdmap_features require_osd_release unknown -> squid
Dec 15 10:36:47 compute-0 ceph-osd[82838]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 15 10:36:47 compute-0 ceph-osd[82838]: osd.0 9 set_numa_affinity not setting numa affinity
Dec 15 10:36:47 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-osd-0[82834]: 2025-12-15T10:36:47.785+0000 7fa890fd9640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 15 10:36:47 compute-0 ceph-osd[82838]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Dec 15 10:36:48 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1140776426; not ready for session (expect reconnect)
Dec 15 10:36:48 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 15 10:36:48 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:48 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 15 10:36:48 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec 15 10:36:48 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 15 10:36:48 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e10 e10: 2 total, 2 up, 2 in
Dec 15 10:36:48 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1140776426,v1:192.168.122.100:6803/1140776426] boot
Dec 15 10:36:48 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 2 up, 2 in
Dec 15 10:36:48 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 15 10:36:48 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:48 compute-0 ceph-mon[74356]: pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 15 10:36:48 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 15 10:36:48 compute-0 ceph-mon[74356]: osdmap e9: 2 total, 1 up, 2 in
Dec 15 10:36:48 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:48 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec 15 10:36:48 compute-0 ceph-mon[74356]: OSD bench result of 2890.376374 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 15 10:36:48 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:48 compute-0 ceph-osd[82838]: osd.0 10 state: booting -> active
Dec 15 10:36:48 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 15 10:36:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec 15 10:36:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Dec 15 10:36:49 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 15 10:36:49 compute-0 ceph-mon[74356]: osd.0 [v2:192.168.122.100:6802/1140776426,v1:192.168.122.100:6803/1140776426] boot
Dec 15 10:36:49 compute-0 ceph-mon[74356]: osdmap e10: 2 total, 2 up, 2 in
Dec 15 10:36:49 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:36:49 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Dec 15 10:36:49 compute-0 ceph-mgr[74651]: [devicehealth INFO root] creating main.db for devicehealth
Dec 15 10:36:49 compute-0 ceph-mgr[74651]: [devicehealth INFO root] Check health
Dec 15 10:36:49 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 15 10:36:49 compute-0 sudo[85169]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Dec 15 10:36:49 compute-0 sudo[85169]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 15 10:36:49 compute-0 sudo[85169]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Dec 15 10:36:49 compute-0 sudo[85169]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:49 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 15 10:36:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 15 10:36:49 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 15 10:36:50 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec 15 10:36:50 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.difmqj(active, since 100s)
Dec 15 10:36:50 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Dec 15 10:36:50 compute-0 ceph-mon[74356]: pgmap v52: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 15 10:36:50 compute-0 ceph-mon[74356]: osdmap e11: 2 total, 2 up, 2 in
Dec 15 10:36:50 compute-0 ceph-mon[74356]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 15 10:36:50 compute-0 ceph-mon[74356]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 15 10:36:50 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 15 10:36:50 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Dec 15 10:36:50 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 15 10:36:50 compute-0 sudo[85195]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eywaunxdxaunmsoqfrlnguignbuhdswt ; /usr/bin/python3'
Dec 15 10:36:50 compute-0 sudo[85195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:36:51 compute-0 python3[85197]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:36:51 compute-0 podman[85199]: 2025-12-15 10:36:51.169668226 +0000 UTC m=+0.038224742 container create cc9aa1612cdd2f3c53399b4aed2df5c3e55a88a6300e7601996370c11567d3fd (image=quay.io/ceph/ceph:v19, name=recursing_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Dec 15 10:36:51 compute-0 systemd[1]: Started libpod-conmon-cc9aa1612cdd2f3c53399b4aed2df5c3e55a88a6300e7601996370c11567d3fd.scope.
Dec 15 10:36:51 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:36:51 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d04b29fe91f3081c235aeb1af8b5016ed03ffdebad60ad02939b9728d5108d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d04b29fe91f3081c235aeb1af8b5016ed03ffdebad60ad02939b9728d5108d0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d04b29fe91f3081c235aeb1af8b5016ed03ffdebad60ad02939b9728d5108d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:51 compute-0 podman[85199]: 2025-12-15 10:36:51.153874972 +0000 UTC m=+0.022431508 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:36:51 compute-0 podman[85199]: 2025-12-15 10:36:51.249837792 +0000 UTC m=+0.118394318 container init cc9aa1612cdd2f3c53399b4aed2df5c3e55a88a6300e7601996370c11567d3fd (image=quay.io/ceph/ceph:v19, name=recursing_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:36:51 compute-0 podman[85199]: 2025-12-15 10:36:51.25992292 +0000 UTC m=+0.128479436 container start cc9aa1612cdd2f3c53399b4aed2df5c3e55a88a6300e7601996370c11567d3fd (image=quay.io/ceph/ceph:v19, name=recursing_davinci, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:36:51 compute-0 podman[85199]: 2025-12-15 10:36:51.264884397 +0000 UTC m=+0.133440913 container attach cc9aa1612cdd2f3c53399b4aed2df5c3e55a88a6300e7601996370c11567d3fd (image=quay.io/ceph/ceph:v19, name=recursing_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:36:51 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 15 10:36:51 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4061800202' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 15 10:36:51 compute-0 recursing_davinci[85215]: 
Dec 15 10:36:51 compute-0 recursing_davinci[85215]: {"fsid":"77365f67-614e-5a8d-b658-640395550c79","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":120,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":12,"num_osds":2,"num_up_osds":2,"osd_up_since":1765795008,"num_in_osds":2,"osd_in_since":1765794987,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":446984192,"bytes_avail":21023657984,"bytes_total":21470642176,"unknown_pgs_ratio":1},"fsmap":{"epoch":1,"btime":"2025-12-15T10:34:49:065673+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-15T10:36:12.638692+0000","services":{}},"progress_events":{}}
Dec 15 10:36:51 compute-0 systemd[1]: libpod-cc9aa1612cdd2f3c53399b4aed2df5c3e55a88a6300e7601996370c11567d3fd.scope: Deactivated successfully.
Dec 15 10:36:51 compute-0 podman[85199]: 2025-12-15 10:36:51.71852724 +0000 UTC m=+0.587083796 container died cc9aa1612cdd2f3c53399b4aed2df5c3e55a88a6300e7601996370c11567d3fd (image=quay.io/ceph/ceph:v19, name=recursing_davinci, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 15 10:36:51 compute-0 ceph-mon[74356]: mgrmap e9: compute-0.difmqj(active, since 100s)
Dec 15 10:36:51 compute-0 ceph-mon[74356]: osdmap e12: 2 total, 2 up, 2 in
Dec 15 10:36:51 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/4061800202' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 15 10:36:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d04b29fe91f3081c235aeb1af8b5016ed03ffdebad60ad02939b9728d5108d0-merged.mount: Deactivated successfully.
Dec 15 10:36:51 compute-0 podman[85199]: 2025-12-15 10:36:51.908907909 +0000 UTC m=+0.777464425 container remove cc9aa1612cdd2f3c53399b4aed2df5c3e55a88a6300e7601996370c11567d3fd (image=quay.io/ceph/ceph:v19, name=recursing_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 15 10:36:51 compute-0 systemd[1]: libpod-conmon-cc9aa1612cdd2f3c53399b4aed2df5c3e55a88a6300e7601996370c11567d3fd.scope: Deactivated successfully.
Dec 15 10:36:51 compute-0 sudo[85195]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:52 compute-0 sudo[85277]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cguaezeuoaucqpbhzeubkhrecbjvpydo ; /usr/bin/python3'
Dec 15 10:36:52 compute-0 sudo[85277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:36:52 compute-0 python3[85279]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:36:52 compute-0 podman[85280]: 2025-12-15 10:36:52.457419253 +0000 UTC m=+0.041992336 container create 828bc7cd03ceaeffba4c41b5c44bf0e870d693fe88bd87038bbc75fd3bbcc07b (image=quay.io/ceph/ceph:v19, name=amazing_northcutt, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:36:52 compute-0 systemd[1]: Started libpod-conmon-828bc7cd03ceaeffba4c41b5c44bf0e870d693fe88bd87038bbc75fd3bbcc07b.scope.
Dec 15 10:36:52 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:36:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/164f2d1df92ecf4e75cc74e845b3512acdbd2023c020be7e6defe5e100801c98/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/164f2d1df92ecf4e75cc74e845b3512acdbd2023c020be7e6defe5e100801c98/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:52 compute-0 podman[85280]: 2025-12-15 10:36:52.525803444 +0000 UTC m=+0.110376567 container init 828bc7cd03ceaeffba4c41b5c44bf0e870d693fe88bd87038bbc75fd3bbcc07b (image=quay.io/ceph/ceph:v19, name=amazing_northcutt, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:36:52 compute-0 podman[85280]: 2025-12-15 10:36:52.530988678 +0000 UTC m=+0.115561771 container start 828bc7cd03ceaeffba4c41b5c44bf0e870d693fe88bd87038bbc75fd3bbcc07b (image=quay.io/ceph/ceph:v19, name=amazing_northcutt, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 15 10:36:52 compute-0 podman[85280]: 2025-12-15 10:36:52.437413523 +0000 UTC m=+0.021986636 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:36:52 compute-0 podman[85280]: 2025-12-15 10:36:52.53396512 +0000 UTC m=+0.118538213 container attach 828bc7cd03ceaeffba4c41b5c44bf0e870d693fe88bd87038bbc75fd3bbcc07b (image=quay.io/ceph/ceph:v19, name=amazing_northcutt, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 15 10:36:52 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:36:52 compute-0 ceph-mon[74356]: pgmap v55: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 15 10:36:52 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 15 10:36:52 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3991094930' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 15 10:36:53 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec 15 10:36:53 compute-0 ceph-mon[74356]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:36:53 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/3991094930' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 15 10:36:53 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3991094930' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 15 10:36:53 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Dec 15 10:36:53 compute-0 amazing_northcutt[85295]: pool 'vms' created
Dec 15 10:36:53 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Dec 15 10:36:53 compute-0 systemd[1]: libpod-828bc7cd03ceaeffba4c41b5c44bf0e870d693fe88bd87038bbc75fd3bbcc07b.scope: Deactivated successfully.
Dec 15 10:36:53 compute-0 podman[85280]: 2025-12-15 10:36:53.885897433 +0000 UTC m=+1.470470526 container died 828bc7cd03ceaeffba4c41b5c44bf0e870d693fe88bd87038bbc75fd3bbcc07b (image=quay.io/ceph/ceph:v19, name=amazing_northcutt, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 15 10:36:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-164f2d1df92ecf4e75cc74e845b3512acdbd2023c020be7e6defe5e100801c98-merged.mount: Deactivated successfully.
Dec 15 10:36:53 compute-0 podman[85280]: 2025-12-15 10:36:53.924877445 +0000 UTC m=+1.509450538 container remove 828bc7cd03ceaeffba4c41b5c44bf0e870d693fe88bd87038bbc75fd3bbcc07b (image=quay.io/ceph/ceph:v19, name=amazing_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 15 10:36:53 compute-0 systemd[1]: libpod-conmon-828bc7cd03ceaeffba4c41b5c44bf0e870d693fe88bd87038bbc75fd3bbcc07b.scope: Deactivated successfully.
Dec 15 10:36:53 compute-0 sudo[85277]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:54 compute-0 sudo[85355]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkavznprtdquaqszqesiotlmsyfbjbmy ; /usr/bin/python3'
Dec 15 10:36:54 compute-0 sudo[85355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:36:54 compute-0 python3[85357]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:36:54 compute-0 podman[85358]: 2025-12-15 10:36:54.279955416 +0000 UTC m=+0.045469452 container create 1a3d9a17a7eae20daedaafdf77dd87f9abe225e70f84b73d85d241e5c7b01cf2 (image=quay.io/ceph/ceph:v19, name=cranky_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:36:54 compute-0 systemd[1]: Started libpod-conmon-1a3d9a17a7eae20daedaafdf77dd87f9abe225e70f84b73d85d241e5c7b01cf2.scope.
Dec 15 10:36:54 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12670c8c90139ddeed13393db1cf665ed3960d2ba6be6eb14ccf204948c9bbaa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12670c8c90139ddeed13393db1cf665ed3960d2ba6be6eb14ccf204948c9bbaa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:54 compute-0 podman[85358]: 2025-12-15 10:36:54.347635439 +0000 UTC m=+0.113149485 container init 1a3d9a17a7eae20daedaafdf77dd87f9abe225e70f84b73d85d241e5c7b01cf2 (image=quay.io/ceph/ceph:v19, name=cranky_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 15 10:36:54 compute-0 podman[85358]: 2025-12-15 10:36:54.254519156 +0000 UTC m=+0.020033192 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:36:54 compute-0 podman[85358]: 2025-12-15 10:36:54.355950638 +0000 UTC m=+0.121464674 container start 1a3d9a17a7eae20daedaafdf77dd87f9abe225e70f84b73d85d241e5c7b01cf2 (image=quay.io/ceph/ceph:v19, name=cranky_lovelace, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 15 10:36:54 compute-0 podman[85358]: 2025-12-15 10:36:54.359471505 +0000 UTC m=+0.124985541 container attach 1a3d9a17a7eae20daedaafdf77dd87f9abe225e70f84b73d85d241e5c7b01cf2 (image=quay.io/ceph/ceph:v19, name=cranky_lovelace, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:36:54 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 15 10:36:54 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3945696344' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 15 10:36:54 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v58: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:36:54 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec 15 10:36:54 compute-0 ceph-mon[74356]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 15 10:36:54 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3945696344' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 15 10:36:54 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Dec 15 10:36:54 compute-0 cranky_lovelace[85373]: pool 'volumes' created
Dec 15 10:36:54 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Dec 15 10:36:54 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 14 pg[3.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:36:54 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/3991094930' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 15 10:36:54 compute-0 ceph-mon[74356]: osdmap e13: 2 total, 2 up, 2 in
Dec 15 10:36:54 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/3945696344' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 15 10:36:54 compute-0 systemd[1]: libpod-1a3d9a17a7eae20daedaafdf77dd87f9abe225e70f84b73d85d241e5c7b01cf2.scope: Deactivated successfully.
Dec 15 10:36:54 compute-0 conmon[85373]: conmon 1a3d9a17a7eae20daeda <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1a3d9a17a7eae20daedaafdf77dd87f9abe225e70f84b73d85d241e5c7b01cf2.scope/container/memory.events
Dec 15 10:36:54 compute-0 podman[85358]: 2025-12-15 10:36:54.895625509 +0000 UTC m=+0.661139525 container died 1a3d9a17a7eae20daedaafdf77dd87f9abe225e70f84b73d85d241e5c7b01cf2 (image=quay.io/ceph/ceph:v19, name=cranky_lovelace, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 15 10:36:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-12670c8c90139ddeed13393db1cf665ed3960d2ba6be6eb14ccf204948c9bbaa-merged.mount: Deactivated successfully.
Dec 15 10:36:54 compute-0 podman[85358]: 2025-12-15 10:36:54.933720746 +0000 UTC m=+0.699234772 container remove 1a3d9a17a7eae20daedaafdf77dd87f9abe225e70f84b73d85d241e5c7b01cf2 (image=quay.io/ceph/ceph:v19, name=cranky_lovelace, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 15 10:36:54 compute-0 systemd[1]: libpod-conmon-1a3d9a17a7eae20daedaafdf77dd87f9abe225e70f84b73d85d241e5c7b01cf2.scope: Deactivated successfully.
Dec 15 10:36:54 compute-0 sudo[85355]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:55 compute-0 sudo[85436]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojoocsthokpzjxtwmcjdlzzdzzqzfkvx ; /usr/bin/python3'
Dec 15 10:36:55 compute-0 sudo[85436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:36:55 compute-0 python3[85438]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:36:55 compute-0 podman[85439]: 2025-12-15 10:36:55.291980245 +0000 UTC m=+0.047434266 container create fe5da3e2d7b942347b313602cc8c1d00379953cb0fb9d181f3a7863f1128f4d1 (image=quay.io/ceph/ceph:v19, name=unruffled_heisenberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:36:55 compute-0 systemd[1]: Started libpod-conmon-fe5da3e2d7b942347b313602cc8c1d00379953cb0fb9d181f3a7863f1128f4d1.scope.
Dec 15 10:36:55 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c0427d697c3fba489116fccc85d0c698f1bd2ca338397cce403a0d3f63b8398/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c0427d697c3fba489116fccc85d0c698f1bd2ca338397cce403a0d3f63b8398/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:55 compute-0 podman[85439]: 2025-12-15 10:36:55.270497384 +0000 UTC m=+0.025951425 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:36:55 compute-0 podman[85439]: 2025-12-15 10:36:55.373574161 +0000 UTC m=+0.129028212 container init fe5da3e2d7b942347b313602cc8c1d00379953cb0fb9d181f3a7863f1128f4d1 (image=quay.io/ceph/ceph:v19, name=unruffled_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:36:55 compute-0 podman[85439]: 2025-12-15 10:36:55.380000618 +0000 UTC m=+0.135454639 container start fe5da3e2d7b942347b313602cc8c1d00379953cb0fb9d181f3a7863f1128f4d1 (image=quay.io/ceph/ceph:v19, name=unruffled_heisenberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:36:55 compute-0 podman[85439]: 2025-12-15 10:36:55.384649185 +0000 UTC m=+0.140103226 container attach fe5da3e2d7b942347b313602cc8c1d00379953cb0fb9d181f3a7863f1128f4d1 (image=quay.io/ceph/ceph:v19, name=unruffled_heisenberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:36:55 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 15 10:36:55 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/196341492' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 15 10:36:55 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec 15 10:36:55 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/196341492' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 15 10:36:55 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Dec 15 10:36:55 compute-0 unruffled_heisenberg[85454]: pool 'backups' created
Dec 15 10:36:55 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Dec 15 10:36:55 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 15 pg[4.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:36:55 compute-0 ceph-mon[74356]: pgmap v58: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:36:55 compute-0 ceph-mon[74356]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 15 10:36:55 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/3945696344' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 15 10:36:55 compute-0 ceph-mon[74356]: osdmap e14: 2 total, 2 up, 2 in
Dec 15 10:36:55 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/196341492' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 15 10:36:55 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/196341492' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 15 10:36:55 compute-0 ceph-mon[74356]: osdmap e15: 2 total, 2 up, 2 in
Dec 15 10:36:55 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 15 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:36:55 compute-0 systemd[1]: libpod-fe5da3e2d7b942347b313602cc8c1d00379953cb0fb9d181f3a7863f1128f4d1.scope: Deactivated successfully.
Dec 15 10:36:55 compute-0 podman[85439]: 2025-12-15 10:36:55.896884451 +0000 UTC m=+0.652338482 container died fe5da3e2d7b942347b313602cc8c1d00379953cb0fb9d181f3a7863f1128f4d1 (image=quay.io/ceph/ceph:v19, name=unruffled_heisenberg, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 15 10:36:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c0427d697c3fba489116fccc85d0c698f1bd2ca338397cce403a0d3f63b8398-merged.mount: Deactivated successfully.
Dec 15 10:36:55 compute-0 podman[85439]: 2025-12-15 10:36:55.939587417 +0000 UTC m=+0.695041428 container remove fe5da3e2d7b942347b313602cc8c1d00379953cb0fb9d181f3a7863f1128f4d1 (image=quay.io/ceph/ceph:v19, name=unruffled_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:36:55 compute-0 systemd[1]: libpod-conmon-fe5da3e2d7b942347b313602cc8c1d00379953cb0fb9d181f3a7863f1128f4d1.scope: Deactivated successfully.
Dec 15 10:36:55 compute-0 sudo[85436]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:56 compute-0 sudo[85516]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwesavqtxtbfpfdnubnkpxcyaqvefans ; /usr/bin/python3'
Dec 15 10:36:56 compute-0 sudo[85516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:36:56 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:36:56 compute-0 python3[85518]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:36:56 compute-0 podman[85519]: 2025-12-15 10:36:56.320366385 +0000 UTC m=+0.041215385 container create 22d3e93042cb905c09397e85ec7fa8b4098aee62dd1971adefdac8bd91cf3174 (image=quay.io/ceph/ceph:v19, name=upbeat_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True)
Dec 15 10:36:56 compute-0 systemd[1]: Started libpod-conmon-22d3e93042cb905c09397e85ec7fa8b4098aee62dd1971adefdac8bd91cf3174.scope.
Dec 15 10:36:56 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:36:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9e904afa4a7e1259db7b95eaaf2d6473d8fcd295e4cb88f9da121fdeb5d817/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9e904afa4a7e1259db7b95eaaf2d6473d8fcd295e4cb88f9da121fdeb5d817/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:56 compute-0 podman[85519]: 2025-12-15 10:36:56.300518899 +0000 UTC m=+0.021367919 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:36:56 compute-0 podman[85519]: 2025-12-15 10:36:56.399991057 +0000 UTC m=+0.120840067 container init 22d3e93042cb905c09397e85ec7fa8b4098aee62dd1971adefdac8bd91cf3174 (image=quay.io/ceph/ceph:v19, name=upbeat_engelbart, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 15 10:36:56 compute-0 podman[85519]: 2025-12-15 10:36:56.409696703 +0000 UTC m=+0.130545693 container start 22d3e93042cb905c09397e85ec7fa8b4098aee62dd1971adefdac8bd91cf3174 (image=quay.io/ceph/ceph:v19, name=upbeat_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 15 10:36:56 compute-0 podman[85519]: 2025-12-15 10:36:56.413132508 +0000 UTC m=+0.133981528 container attach 22d3e93042cb905c09397e85ec7fa8b4098aee62dd1971adefdac8bd91cf3174 (image=quay.io/ceph/ceph:v19, name=upbeat_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec 15 10:36:56 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 15 10:36:56 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/195165527' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 15 10:36:56 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v61: 4 pgs: 3 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:36:56 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec 15 10:36:56 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/195165527' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 15 10:36:56 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Dec 15 10:36:56 compute-0 upbeat_engelbart[85534]: pool 'images' created
Dec 15 10:36:56 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Dec 15 10:36:56 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 16 pg[5.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:36:56 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/195165527' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 15 10:36:56 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/195165527' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 15 10:36:56 compute-0 ceph-mon[74356]: osdmap e16: 2 total, 2 up, 2 in
Dec 15 10:36:56 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 16 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:36:56 compute-0 systemd[1]: libpod-22d3e93042cb905c09397e85ec7fa8b4098aee62dd1971adefdac8bd91cf3174.scope: Deactivated successfully.
Dec 15 10:36:56 compute-0 podman[85519]: 2025-12-15 10:36:56.909913768 +0000 UTC m=+0.630762758 container died 22d3e93042cb905c09397e85ec7fa8b4098aee62dd1971adefdac8bd91cf3174 (image=quay.io/ceph/ceph:v19, name=upbeat_engelbart, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 15 10:36:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a9e904afa4a7e1259db7b95eaaf2d6473d8fcd295e4cb88f9da121fdeb5d817-merged.mount: Deactivated successfully.
Dec 15 10:36:56 compute-0 podman[85519]: 2025-12-15 10:36:56.94958213 +0000 UTC m=+0.670431120 container remove 22d3e93042cb905c09397e85ec7fa8b4098aee62dd1971adefdac8bd91cf3174 (image=quay.io/ceph/ceph:v19, name=upbeat_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 15 10:36:56 compute-0 systemd[1]: libpod-conmon-22d3e93042cb905c09397e85ec7fa8b4098aee62dd1971adefdac8bd91cf3174.scope: Deactivated successfully.
Dec 15 10:36:56 compute-0 sudo[85516]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:57 compute-0 sudo[85595]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skvjlegoxtcwactjdksemvpnnsvulhmw ; /usr/bin/python3'
Dec 15 10:36:57 compute-0 sudo[85595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:36:57 compute-0 python3[85597]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:36:57 compute-0 podman[85598]: 2025-12-15 10:36:57.309984268 +0000 UTC m=+0.070347817 container create 4bb9039706a172572ce6f83b48fb0dd117c48ead1ebbd2ba6b6ebb763074d07f (image=quay.io/ceph/ceph:v19, name=hardcore_hypatia, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:36:57 compute-0 systemd[1]: Started libpod-conmon-4bb9039706a172572ce6f83b48fb0dd117c48ead1ebbd2ba6b6ebb763074d07f.scope.
Dec 15 10:36:57 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8a3e5c557b753d1af653fefdf2242c7a3f66c61848ef6b4eb0b162a2232c1d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8a3e5c557b753d1af653fefdf2242c7a3f66c61848ef6b4eb0b162a2232c1d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:57 compute-0 podman[85598]: 2025-12-15 10:36:57.377052374 +0000 UTC m=+0.137415953 container init 4bb9039706a172572ce6f83b48fb0dd117c48ead1ebbd2ba6b6ebb763074d07f (image=quay.io/ceph/ceph:v19, name=hardcore_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:36:57 compute-0 podman[85598]: 2025-12-15 10:36:57.288091664 +0000 UTC m=+0.048455223 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:36:57 compute-0 podman[85598]: 2025-12-15 10:36:57.38313568 +0000 UTC m=+0.143499229 container start 4bb9039706a172572ce6f83b48fb0dd117c48ead1ebbd2ba6b6ebb763074d07f (image=quay.io/ceph/ceph:v19, name=hardcore_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 15 10:36:57 compute-0 podman[85598]: 2025-12-15 10:36:57.386157623 +0000 UTC m=+0.146521192 container attach 4bb9039706a172572ce6f83b48fb0dd117c48ead1ebbd2ba6b6ebb763074d07f (image=quay.io/ceph/ceph:v19, name=hardcore_hypatia, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 15 10:36:57 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 15 10:36:57 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3438837194' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 15 10:36:57 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec 15 10:36:57 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3438837194' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 15 10:36:57 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Dec 15 10:36:57 compute-0 hardcore_hypatia[85613]: pool 'cephfs.cephfs.meta' created
Dec 15 10:36:57 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Dec 15 10:36:57 compute-0 ceph-mon[74356]: pgmap v61: 4 pgs: 3 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:36:57 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/3438837194' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 15 10:36:57 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 17 pg[6.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:36:57 compute-0 systemd[1]: libpod-4bb9039706a172572ce6f83b48fb0dd117c48ead1ebbd2ba6b6ebb763074d07f.scope: Deactivated successfully.
Dec 15 10:36:57 compute-0 podman[85598]: 2025-12-15 10:36:57.981910268 +0000 UTC m=+0.742273837 container died 4bb9039706a172572ce6f83b48fb0dd117c48ead1ebbd2ba6b6ebb763074d07f (image=quay.io/ceph/ceph:v19, name=hardcore_hypatia, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:36:57 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 17 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:36:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d8a3e5c557b753d1af653fefdf2242c7a3f66c61848ef6b4eb0b162a2232c1d-merged.mount: Deactivated successfully.
Dec 15 10:36:58 compute-0 podman[85598]: 2025-12-15 10:36:58.049104408 +0000 UTC m=+0.809467957 container remove 4bb9039706a172572ce6f83b48fb0dd117c48ead1ebbd2ba6b6ebb763074d07f (image=quay.io/ceph/ceph:v19, name=hardcore_hypatia, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 15 10:36:58 compute-0 sudo[85595]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:58 compute-0 systemd[1]: libpod-conmon-4bb9039706a172572ce6f83b48fb0dd117c48ead1ebbd2ba6b6ebb763074d07f.scope: Deactivated successfully.
Dec 15 10:36:58 compute-0 sudo[85675]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgokydsfglxygnczbryfbyxlmmtsllet ; /usr/bin/python3'
Dec 15 10:36:58 compute-0 sudo[85675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:36:58 compute-0 python3[85677]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:36:58 compute-0 podman[85678]: 2025-12-15 10:36:58.404417935 +0000 UTC m=+0.046123861 container create 205e5b567f2904bba584e7ada948a94a63f2289211500b1922aca011cbfb821a (image=quay.io/ceph/ceph:v19, name=infallible_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 15 10:36:58 compute-0 systemd[1]: Started libpod-conmon-205e5b567f2904bba584e7ada948a94a63f2289211500b1922aca011cbfb821a.scope.
Dec 15 10:36:58 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:36:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/659bca1862f1c145a656acfff844cf03f022538c60d00539651aa3fdf3aeb36a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/659bca1862f1c145a656acfff844cf03f022538c60d00539651aa3fdf3aeb36a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:58 compute-0 podman[85678]: 2025-12-15 10:36:58.384293371 +0000 UTC m=+0.025999317 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:36:58 compute-0 podman[85678]: 2025-12-15 10:36:58.487361257 +0000 UTC m=+0.129067203 container init 205e5b567f2904bba584e7ada948a94a63f2289211500b1922aca011cbfb821a (image=quay.io/ceph/ceph:v19, name=infallible_dirac, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 15 10:36:58 compute-0 podman[85678]: 2025-12-15 10:36:58.494330529 +0000 UTC m=+0.136036465 container start 205e5b567f2904bba584e7ada948a94a63f2289211500b1922aca011cbfb821a (image=quay.io/ceph/ceph:v19, name=infallible_dirac, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 15 10:36:58 compute-0 podman[85678]: 2025-12-15 10:36:58.498535765 +0000 UTC m=+0.140241691 container attach 205e5b567f2904bba584e7ada948a94a63f2289211500b1922aca011cbfb821a (image=quay.io/ceph/ceph:v19, name=infallible_dirac, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:36:58 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v64: 6 pgs: 2 active+clean, 4 unknown; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:36:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 15 10:36:58 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/155191478' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 15 10:36:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec 15 10:36:58 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/155191478' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 15 10:36:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Dec 15 10:36:58 compute-0 infallible_dirac[85693]: pool 'cephfs.cephfs.data' created
Dec 15 10:36:58 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Dec 15 10:36:58 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/3438837194' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 15 10:36:58 compute-0 ceph-mon[74356]: osdmap e17: 2 total, 2 up, 2 in
Dec 15 10:36:58 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/155191478' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 15 10:36:58 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 18 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:36:59 compute-0 systemd[1]: libpod-205e5b567f2904bba584e7ada948a94a63f2289211500b1922aca011cbfb821a.scope: Deactivated successfully.
Dec 15 10:36:59 compute-0 podman[85678]: 2025-12-15 10:36:59.012432396 +0000 UTC m=+0.654138322 container died 205e5b567f2904bba584e7ada948a94a63f2289211500b1922aca011cbfb821a (image=quay.io/ceph/ceph:v19, name=infallible_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:36:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-659bca1862f1c145a656acfff844cf03f022538c60d00539651aa3fdf3aeb36a-merged.mount: Deactivated successfully.
Dec 15 10:36:59 compute-0 podman[85678]: 2025-12-15 10:36:59.047383638 +0000 UTC m=+0.689089564 container remove 205e5b567f2904bba584e7ada948a94a63f2289211500b1922aca011cbfb821a (image=quay.io/ceph/ceph:v19, name=infallible_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 15 10:36:59 compute-0 systemd[1]: libpod-conmon-205e5b567f2904bba584e7ada948a94a63f2289211500b1922aca011cbfb821a.scope: Deactivated successfully.
Dec 15 10:36:59 compute-0 sudo[85675]: pam_unix(sudo:session): session closed for user root
Dec 15 10:36:59 compute-0 sudo[85755]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwdsxeevirssdzswjjzeczkrehhcnwmm ; /usr/bin/python3'
Dec 15 10:36:59 compute-0 sudo[85755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:36:59 compute-0 python3[85757]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:36:59 compute-0 podman[85758]: 2025-12-15 10:36:59.447029335 +0000 UTC m=+0.045985315 container create d9e7a51d4c62d054871bcb7a8a47b0a1e82e56daf6ad22a1a1cafbae4980a405 (image=quay.io/ceph/ceph:v19, name=unruffled_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 15 10:36:59 compute-0 systemd[1]: Started libpod-conmon-d9e7a51d4c62d054871bcb7a8a47b0a1e82e56daf6ad22a1a1cafbae4980a405.scope.
Dec 15 10:36:59 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93a4126f4dd5d1d1b78e3eda365d8b791ddb2d4bddff645e147f7369240baf34/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93a4126f4dd5d1d1b78e3eda365d8b791ddb2d4bddff645e147f7369240baf34/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:36:59 compute-0 podman[85758]: 2025-12-15 10:36:59.51587157 +0000 UTC m=+0.114827570 container init d9e7a51d4c62d054871bcb7a8a47b0a1e82e56daf6ad22a1a1cafbae4980a405 (image=quay.io/ceph/ceph:v19, name=unruffled_rubin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:36:59 compute-0 podman[85758]: 2025-12-15 10:36:59.425875534 +0000 UTC m=+0.024831534 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:36:59 compute-0 podman[85758]: 2025-12-15 10:36:59.522169633 +0000 UTC m=+0.121125603 container start d9e7a51d4c62d054871bcb7a8a47b0a1e82e56daf6ad22a1a1cafbae4980a405 (image=quay.io/ceph/ceph:v19, name=unruffled_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 15 10:36:59 compute-0 podman[85758]: 2025-12-15 10:36:59.526975476 +0000 UTC m=+0.125931446 container attach d9e7a51d4c62d054871bcb7a8a47b0a1e82e56daf6ad22a1a1cafbae4980a405 (image=quay.io/ceph/ceph:v19, name=unruffled_rubin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 15 10:36:59 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Dec 15 10:36:59 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3427179052' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec 15 10:36:59 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec 15 10:37:00 compute-0 ceph-mon[74356]: pgmap v64: 6 pgs: 2 active+clean, 4 unknown; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:00 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/155191478' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 15 10:37:00 compute-0 ceph-mon[74356]: osdmap e18: 2 total, 2 up, 2 in
Dec 15 10:37:00 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/3427179052' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec 15 10:37:00 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3427179052' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 15 10:37:00 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Dec 15 10:37:00 compute-0 unruffled_rubin[85774]: enabled application 'rbd' on pool 'vms'
Dec 15 10:37:00 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Dec 15 10:37:00 compute-0 systemd[1]: libpod-d9e7a51d4c62d054871bcb7a8a47b0a1e82e56daf6ad22a1a1cafbae4980a405.scope: Deactivated successfully.
Dec 15 10:37:00 compute-0 conmon[85774]: conmon d9e7a51d4c62d054871b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d9e7a51d4c62d054871bcb7a8a47b0a1e82e56daf6ad22a1a1cafbae4980a405.scope/container/memory.events
Dec 15 10:37:00 compute-0 podman[85758]: 2025-12-15 10:37:00.031656503 +0000 UTC m=+0.630612463 container died d9e7a51d4c62d054871bcb7a8a47b0a1e82e56daf6ad22a1a1cafbae4980a405 (image=quay.io/ceph/ceph:v19, name=unruffled_rubin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:37:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-93a4126f4dd5d1d1b78e3eda365d8b791ddb2d4bddff645e147f7369240baf34-merged.mount: Deactivated successfully.
Dec 15 10:37:00 compute-0 podman[85758]: 2025-12-15 10:37:00.078658327 +0000 UTC m=+0.677614297 container remove d9e7a51d4c62d054871bcb7a8a47b0a1e82e56daf6ad22a1a1cafbae4980a405 (image=quay.io/ceph/ceph:v19, name=unruffled_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 15 10:37:00 compute-0 systemd[1]: libpod-conmon-d9e7a51d4c62d054871bcb7a8a47b0a1e82e56daf6ad22a1a1cafbae4980a405.scope: Deactivated successfully.
Dec 15 10:37:00 compute-0 sudo[85755]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:00 compute-0 sudo[85833]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omfgjsqqgdtzoyxelrkzqiiwiznrgvhd ; /usr/bin/python3'
Dec 15 10:37:00 compute-0 sudo[85833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:00 compute-0 python3[85835]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:00 compute-0 podman[85836]: 2025-12-15 10:37:00.433973805 +0000 UTC m=+0.044365202 container create e3019ad1065fe7723ae0c4b22e90cea3abd511e527be0cf25d88357a1a2ef6a4 (image=quay.io/ceph/ceph:v19, name=recursing_carver, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:37:00 compute-0 systemd[1]: Started libpod-conmon-e3019ad1065fe7723ae0c4b22e90cea3abd511e527be0cf25d88357a1a2ef6a4.scope.
Dec 15 10:37:00 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023c3d43dafc72ad1f15ce9d5494323ce7a5cbde5b4a0eb1b3737e13af0a0a85/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023c3d43dafc72ad1f15ce9d5494323ce7a5cbde5b4a0eb1b3737e13af0a0a85/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:00 compute-0 podman[85836]: 2025-12-15 10:37:00.498886581 +0000 UTC m=+0.109277988 container init e3019ad1065fe7723ae0c4b22e90cea3abd511e527be0cf25d88357a1a2ef6a4 (image=quay.io/ceph/ceph:v19, name=recursing_carver, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:37:00 compute-0 podman[85836]: 2025-12-15 10:37:00.50502378 +0000 UTC m=+0.115415177 container start e3019ad1065fe7723ae0c4b22e90cea3abd511e527be0cf25d88357a1a2ef6a4 (image=quay.io/ceph/ceph:v19, name=recursing_carver, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 15 10:37:00 compute-0 podman[85836]: 2025-12-15 10:37:00.414230692 +0000 UTC m=+0.024622109 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:00 compute-0 podman[85836]: 2025-12-15 10:37:00.509172384 +0000 UTC m=+0.119563781 container attach e3019ad1065fe7723ae0c4b22e90cea3abd511e527be0cf25d88357a1a2ef6a4 (image=quay.io/ceph/ceph:v19, name=recursing_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:37:00 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 6 active+clean, 1 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:00 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Dec 15 10:37:00 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1574363046' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec 15 10:37:01 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec 15 10:37:01 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1574363046' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 15 10:37:01 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Dec 15 10:37:01 compute-0 recursing_carver[85852]: enabled application 'rbd' on pool 'volumes'
Dec 15 10:37:01 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Dec 15 10:37:01 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/3427179052' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 15 10:37:01 compute-0 ceph-mon[74356]: osdmap e19: 2 total, 2 up, 2 in
Dec 15 10:37:01 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/1574363046' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec 15 10:37:01 compute-0 systemd[1]: libpod-e3019ad1065fe7723ae0c4b22e90cea3abd511e527be0cf25d88357a1a2ef6a4.scope: Deactivated successfully.
Dec 15 10:37:01 compute-0 podman[85836]: 2025-12-15 10:37:01.040001451 +0000 UTC m=+0.650392868 container died e3019ad1065fe7723ae0c4b22e90cea3abd511e527be0cf25d88357a1a2ef6a4 (image=quay.io/ceph/ceph:v19, name=recursing_carver, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:37:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-023c3d43dafc72ad1f15ce9d5494323ce7a5cbde5b4a0eb1b3737e13af0a0a85-merged.mount: Deactivated successfully.
Dec 15 10:37:01 compute-0 podman[85836]: 2025-12-15 10:37:01.076559007 +0000 UTC m=+0.686950404 container remove e3019ad1065fe7723ae0c4b22e90cea3abd511e527be0cf25d88357a1a2ef6a4 (image=quay.io/ceph/ceph:v19, name=recursing_carver, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 15 10:37:01 compute-0 systemd[1]: libpod-conmon-e3019ad1065fe7723ae0c4b22e90cea3abd511e527be0cf25d88357a1a2ef6a4.scope: Deactivated successfully.
Dec 15 10:37:01 compute-0 sudo[85833]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:01 compute-0 sudo[85911]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmduuyqydmlfwfdgzdpgkkyihbtmkbjw ; /usr/bin/python3'
Dec 15 10:37:01 compute-0 sudo[85911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:01 compute-0 ceph-mon[74356]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 15 10:37:01 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:37:01 compute-0 python3[85913]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:01 compute-0 podman[85914]: 2025-12-15 10:37:01.477541152 +0000 UTC m=+0.085435743 container create 677ec668a8a1cb7adccaea1236635ec918bbaeb92987e6a29bab17da66351629 (image=quay.io/ceph/ceph:v19, name=brave_ellis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:37:01 compute-0 podman[85914]: 2025-12-15 10:37:01.414502887 +0000 UTC m=+0.022397508 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:01 compute-0 systemd[1]: Started libpod-conmon-677ec668a8a1cb7adccaea1236635ec918bbaeb92987e6a29bab17da66351629.scope.
Dec 15 10:37:01 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ad8f9133afcff4270c5ea0630d93932f9d2cbd6b43c5f42e99bece4488a216/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ad8f9133afcff4270c5ea0630d93932f9d2cbd6b43c5f42e99bece4488a216/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:01 compute-0 podman[85914]: 2025-12-15 10:37:01.555676982 +0000 UTC m=+0.163571583 container init 677ec668a8a1cb7adccaea1236635ec918bbaeb92987e6a29bab17da66351629 (image=quay.io/ceph/ceph:v19, name=brave_ellis, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:37:01 compute-0 podman[85914]: 2025-12-15 10:37:01.561710058 +0000 UTC m=+0.169604649 container start 677ec668a8a1cb7adccaea1236635ec918bbaeb92987e6a29bab17da66351629 (image=quay.io/ceph/ceph:v19, name=brave_ellis, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 15 10:37:01 compute-0 podman[85914]: 2025-12-15 10:37:01.567138328 +0000 UTC m=+0.175032919 container attach 677ec668a8a1cb7adccaea1236635ec918bbaeb92987e6a29bab17da66351629 (image=quay.io/ceph/ceph:v19, name=brave_ellis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 15 10:37:01 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Dec 15 10:37:01 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1583037157' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec 15 10:37:02 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec 15 10:37:02 compute-0 ceph-mon[74356]: pgmap v67: 7 pgs: 6 active+clean, 1 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:02 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/1574363046' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 15 10:37:02 compute-0 ceph-mon[74356]: osdmap e20: 2 total, 2 up, 2 in
Dec 15 10:37:02 compute-0 ceph-mon[74356]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 15 10:37:02 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/1583037157' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec 15 10:37:02 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1583037157' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 15 10:37:02 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Dec 15 10:37:02 compute-0 brave_ellis[85928]: enabled application 'rbd' on pool 'backups'
Dec 15 10:37:02 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Dec 15 10:37:02 compute-0 systemd[1]: libpod-677ec668a8a1cb7adccaea1236635ec918bbaeb92987e6a29bab17da66351629.scope: Deactivated successfully.
Dec 15 10:37:02 compute-0 podman[85914]: 2025-12-15 10:37:02.060989048 +0000 UTC m=+0.668883639 container died 677ec668a8a1cb7adccaea1236635ec918bbaeb92987e6a29bab17da66351629 (image=quay.io/ceph/ceph:v19, name=brave_ellis, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:37:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-48ad8f9133afcff4270c5ea0630d93932f9d2cbd6b43c5f42e99bece4488a216-merged.mount: Deactivated successfully.
Dec 15 10:37:02 compute-0 podman[85914]: 2025-12-15 10:37:02.097655256 +0000 UTC m=+0.705549847 container remove 677ec668a8a1cb7adccaea1236635ec918bbaeb92987e6a29bab17da66351629 (image=quay.io/ceph/ceph:v19, name=brave_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 15 10:37:02 compute-0 systemd[1]: libpod-conmon-677ec668a8a1cb7adccaea1236635ec918bbaeb92987e6a29bab17da66351629.scope: Deactivated successfully.
Dec 15 10:37:02 compute-0 sudo[85911]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:02 compute-0 sudo[85988]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqcbrpbgtcyxpccebvgvvknhzzkbwcry ; /usr/bin/python3'
Dec 15 10:37:02 compute-0 sudo[85988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:02 compute-0 python3[85990]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:02 compute-0 podman[85991]: 2025-12-15 10:37:02.453717764 +0000 UTC m=+0.043044195 container create fff91c883d281c3f8cfec69b933a26e175743081489c2441ab87b4c114e98d7b (image=quay.io/ceph/ceph:v19, name=infallible_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 15 10:37:02 compute-0 systemd[1]: Started libpod-conmon-fff91c883d281c3f8cfec69b933a26e175743081489c2441ab87b4c114e98d7b.scope.
Dec 15 10:37:02 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f276be1d0f4304d0afe789fcaa878a721756d3641e3f6e6be3cfc6f8ada27020/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f276be1d0f4304d0afe789fcaa878a721756d3641e3f6e6be3cfc6f8ada27020/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:02 compute-0 podman[85991]: 2025-12-15 10:37:02.519718121 +0000 UTC m=+0.109044562 container init fff91c883d281c3f8cfec69b933a26e175743081489c2441ab87b4c114e98d7b (image=quay.io/ceph/ceph:v19, name=infallible_napier, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 15 10:37:02 compute-0 podman[85991]: 2025-12-15 10:37:02.524595515 +0000 UTC m=+0.113921946 container start fff91c883d281c3f8cfec69b933a26e175743081489c2441ab87b4c114e98d7b (image=quay.io/ceph/ceph:v19, name=infallible_napier, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 15 10:37:02 compute-0 podman[85991]: 2025-12-15 10:37:02.432864401 +0000 UTC m=+0.022190872 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:02 compute-0 podman[85991]: 2025-12-15 10:37:02.52916414 +0000 UTC m=+0.118490571 container attach fff91c883d281c3f8cfec69b933a26e175743081489c2441ab87b4c114e98d7b (image=quay.io/ceph/ceph:v19, name=infallible_napier, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:37:02 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:02 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Dec 15 10:37:02 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2066451463' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec 15 10:37:03 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec 15 10:37:03 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/1583037157' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 15 10:37:03 compute-0 ceph-mon[74356]: osdmap e21: 2 total, 2 up, 2 in
Dec 15 10:37:03 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/2066451463' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec 15 10:37:03 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2066451463' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 15 10:37:03 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Dec 15 10:37:03 compute-0 infallible_napier[86005]: enabled application 'rbd' on pool 'images'
Dec 15 10:37:03 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Dec 15 10:37:03 compute-0 systemd[1]: libpod-fff91c883d281c3f8cfec69b933a26e175743081489c2441ab87b4c114e98d7b.scope: Deactivated successfully.
Dec 15 10:37:03 compute-0 podman[85991]: 2025-12-15 10:37:03.077551791 +0000 UTC m=+0.666878212 container died fff91c883d281c3f8cfec69b933a26e175743081489c2441ab87b4c114e98d7b (image=quay.io/ceph/ceph:v19, name=infallible_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:37:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-f276be1d0f4304d0afe789fcaa878a721756d3641e3f6e6be3cfc6f8ada27020-merged.mount: Deactivated successfully.
Dec 15 10:37:03 compute-0 podman[85991]: 2025-12-15 10:37:03.110079197 +0000 UTC m=+0.699405628 container remove fff91c883d281c3f8cfec69b933a26e175743081489c2441ab87b4c114e98d7b (image=quay.io/ceph/ceph:v19, name=infallible_napier, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:37:03 compute-0 systemd[1]: libpod-conmon-fff91c883d281c3f8cfec69b933a26e175743081489c2441ab87b4c114e98d7b.scope: Deactivated successfully.
Dec 15 10:37:03 compute-0 sudo[85988]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:03 compute-0 sudo[86065]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ettgxxmzfibdnwaloehcxzeubairqorz ; /usr/bin/python3'
Dec 15 10:37:03 compute-0 sudo[86065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:03 compute-0 python3[86067]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:03 compute-0 podman[86068]: 2025-12-15 10:37:03.454572867 +0000 UTC m=+0.046044859 container create afbafcb573d0dd8b8691dc44d289e169c44172d28e2e89cbce662345f9417d54 (image=quay.io/ceph/ceph:v19, name=bold_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:37:03 compute-0 systemd[1]: Started libpod-conmon-afbafcb573d0dd8b8691dc44d289e169c44172d28e2e89cbce662345f9417d54.scope.
Dec 15 10:37:03 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d09204a082c9537d2b8d57757632560d4d61b0552d5eb649574cc33886fd97b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d09204a082c9537d2b8d57757632560d4d61b0552d5eb649574cc33886fd97b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:03 compute-0 podman[86068]: 2025-12-15 10:37:03.430579036 +0000 UTC m=+0.022051048 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:03 compute-0 podman[86068]: 2025-12-15 10:37:03.53028273 +0000 UTC m=+0.121754742 container init afbafcb573d0dd8b8691dc44d289e169c44172d28e2e89cbce662345f9417d54 (image=quay.io/ceph/ceph:v19, name=bold_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 15 10:37:03 compute-0 podman[86068]: 2025-12-15 10:37:03.5360859 +0000 UTC m=+0.127557902 container start afbafcb573d0dd8b8691dc44d289e169c44172d28e2e89cbce662345f9417d54 (image=quay.io/ceph/ceph:v19, name=bold_keldysh, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:37:03 compute-0 podman[86068]: 2025-12-15 10:37:03.749495852 +0000 UTC m=+0.340967844 container attach afbafcb573d0dd8b8691dc44d289e169c44172d28e2e89cbce662345f9417d54 (image=quay.io/ceph/ceph:v19, name=bold_keldysh, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:37:03 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Dec 15 10:37:03 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/907949285' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec 15 10:37:04 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec 15 10:37:04 compute-0 ceph-mon[74356]: pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:04 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/2066451463' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 15 10:37:04 compute-0 ceph-mon[74356]: osdmap e22: 2 total, 2 up, 2 in
Dec 15 10:37:04 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/907949285' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec 15 10:37:04 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/907949285' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 15 10:37:04 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Dec 15 10:37:04 compute-0 bold_keldysh[86083]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec 15 10:37:04 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Dec 15 10:37:04 compute-0 systemd[1]: libpod-afbafcb573d0dd8b8691dc44d289e169c44172d28e2e89cbce662345f9417d54.scope: Deactivated successfully.
Dec 15 10:37:04 compute-0 podman[86068]: 2025-12-15 10:37:04.171217637 +0000 UTC m=+0.762689629 container died afbafcb573d0dd8b8691dc44d289e169c44172d28e2e89cbce662345f9417d54 (image=quay.io/ceph/ceph:v19, name=bold_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 15 10:37:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d09204a082c9537d2b8d57757632560d4d61b0552d5eb649574cc33886fd97b-merged.mount: Deactivated successfully.
Dec 15 10:37:04 compute-0 podman[86068]: 2025-12-15 10:37:04.288064423 +0000 UTC m=+0.879536415 container remove afbafcb573d0dd8b8691dc44d289e169c44172d28e2e89cbce662345f9417d54 (image=quay.io/ceph/ceph:v19, name=bold_keldysh, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 15 10:37:04 compute-0 sudo[86065]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:04 compute-0 systemd[1]: libpod-conmon-afbafcb573d0dd8b8691dc44d289e169c44172d28e2e89cbce662345f9417d54.scope: Deactivated successfully.
Dec 15 10:37:04 compute-0 sudo[86145]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sevognjbfomvivodiqhnnktqvmjoredq ; /usr/bin/python3'
Dec 15 10:37:04 compute-0 sudo[86145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:04 compute-0 python3[86147]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:04 compute-0 podman[86148]: 2025-12-15 10:37:04.623452562 +0000 UTC m=+0.045712538 container create 3e9e339a2d7524c2e5accd0451f61d9a03d19d445222505e30242a81deab462b (image=quay.io/ceph/ceph:v19, name=condescending_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 15 10:37:04 compute-0 systemd[1]: Started libpod-conmon-3e9e339a2d7524c2e5accd0451f61d9a03d19d445222505e30242a81deab462b.scope.
Dec 15 10:37:04 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1212e98f57615c81df94d9d75ced19d79546e980f5f8d73e3dfa0197ec7754c5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1212e98f57615c81df94d9d75ced19d79546e980f5f8d73e3dfa0197ec7754c5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:04 compute-0 podman[86148]: 2025-12-15 10:37:04.601185059 +0000 UTC m=+0.023445095 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:04 compute-0 podman[86148]: 2025-12-15 10:37:04.70005155 +0000 UTC m=+0.122311596 container init 3e9e339a2d7524c2e5accd0451f61d9a03d19d445222505e30242a81deab462b (image=quay.io/ceph/ceph:v19, name=condescending_hofstadter, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 15 10:37:04 compute-0 podman[86148]: 2025-12-15 10:37:04.705305635 +0000 UTC m=+0.127565651 container start 3e9e339a2d7524c2e5accd0451f61d9a03d19d445222505e30242a81deab462b (image=quay.io/ceph/ceph:v19, name=condescending_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 15 10:37:04 compute-0 podman[86148]: 2025-12-15 10:37:04.70912075 +0000 UTC m=+0.131380726 container attach 3e9e339a2d7524c2e5accd0451f61d9a03d19d445222505e30242a81deab462b (image=quay.io/ceph/ceph:v19, name=condescending_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:37:04 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Dec 15 10:37:05 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4004452964' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec 15 10:37:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec 15 10:37:05 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4004452964' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 15 10:37:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e24 e24: 2 total, 2 up, 2 in
Dec 15 10:37:05 compute-0 condescending_hofstadter[86163]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec 15 10:37:05 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 2 up, 2 in
Dec 15 10:37:05 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/907949285' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 15 10:37:05 compute-0 ceph-mon[74356]: osdmap e23: 2 total, 2 up, 2 in
Dec 15 10:37:05 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/4004452964' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec 15 10:37:05 compute-0 systemd[1]: libpod-3e9e339a2d7524c2e5accd0451f61d9a03d19d445222505e30242a81deab462b.scope: Deactivated successfully.
Dec 15 10:37:05 compute-0 podman[86148]: 2025-12-15 10:37:05.178515827 +0000 UTC m=+0.600775803 container died 3e9e339a2d7524c2e5accd0451f61d9a03d19d445222505e30242a81deab462b (image=quay.io/ceph/ceph:v19, name=condescending_hofstadter, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 15 10:37:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-1212e98f57615c81df94d9d75ced19d79546e980f5f8d73e3dfa0197ec7754c5-merged.mount: Deactivated successfully.
Dec 15 10:37:05 compute-0 podman[86148]: 2025-12-15 10:37:05.236516452 +0000 UTC m=+0.658776418 container remove 3e9e339a2d7524c2e5accd0451f61d9a03d19d445222505e30242a81deab462b (image=quay.io/ceph/ceph:v19, name=condescending_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Dec 15 10:37:05 compute-0 systemd[1]: libpod-conmon-3e9e339a2d7524c2e5accd0451f61d9a03d19d445222505e30242a81deab462b.scope: Deactivated successfully.
Dec 15 10:37:05 compute-0 sudo[86145]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:06 compute-0 ceph-mon[74356]: pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:06 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/4004452964' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 15 10:37:06 compute-0 ceph-mon[74356]: osdmap e24: 2 total, 2 up, 2 in
Dec 15 10:37:06 compute-0 python3[86277]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 10:37:06 compute-0 ceph-mon[74356]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 15 10:37:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:37:06 compute-0 python3[86348]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765795025.965048-37248-276381523140309/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:37:06 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:07 compute-0 sudo[86448]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpibbjqdishdwisoplcjpiyfqxnpndts ; /usr/bin/python3'
Dec 15 10:37:07 compute-0 sudo[86448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:07 compute-0 python3[86450]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 10:37:07 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 15 10:37:07 compute-0 ceph-mon[74356]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 15 10:37:07 compute-0 sudo[86448]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:07 compute-0 sudo[86523]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofiptnsoeilpstshbyzdpaznjighzoaw ; /usr/bin/python3'
Dec 15 10:37:07 compute-0 sudo[86523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:07 compute-0 python3[86525]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765795026.8659422-37262-54875614040681/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=d0fada96f0c09b2cc2a1cb3e93b0f194be0fe7b0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:37:07 compute-0 sudo[86523]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:07 compute-0 sudo[86573]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywwbtzoomxkwokpyhokjgzihvgambxxl ; /usr/bin/python3'
Dec 15 10:37:07 compute-0 sudo[86573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:07 compute-0 python3[86575]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:07 compute-0 podman[86576]: 2025-12-15 10:37:07.950030784 +0000 UTC m=+0.040621460 container create 94c01e1af3a157dff6bc780eb1916cf819729c7f5f8616ea9bafaf761675497e (image=quay.io/ceph/ceph:v19, name=mystifying_moore, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 15 10:37:07 compute-0 systemd[1]: Started libpod-conmon-94c01e1af3a157dff6bc780eb1916cf819729c7f5f8616ea9bafaf761675497e.scope.
Dec 15 10:37:08 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b27b0d25d82620fa71cfcd85e9f92544e4c8141732e8c8519ad5f26e02b9138/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b27b0d25d82620fa71cfcd85e9f92544e4c8141732e8c8519ad5f26e02b9138/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b27b0d25d82620fa71cfcd85e9f92544e4c8141732e8c8519ad5f26e02b9138/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:08 compute-0 podman[86576]: 2025-12-15 10:37:07.932170022 +0000 UTC m=+0.022760718 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:08 compute-0 podman[86576]: 2025-12-15 10:37:08.028804692 +0000 UTC m=+0.119395388 container init 94c01e1af3a157dff6bc780eb1916cf819729c7f5f8616ea9bafaf761675497e (image=quay.io/ceph/ceph:v19, name=mystifying_moore, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:37:08 compute-0 podman[86576]: 2025-12-15 10:37:08.03568942 +0000 UTC m=+0.126280096 container start 94c01e1af3a157dff6bc780eb1916cf819729c7f5f8616ea9bafaf761675497e (image=quay.io/ceph/ceph:v19, name=mystifying_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 15 10:37:08 compute-0 podman[86576]: 2025-12-15 10:37:08.039557787 +0000 UTC m=+0.130148463 container attach 94c01e1af3a157dff6bc780eb1916cf819729c7f5f8616ea9bafaf761675497e (image=quay.io/ceph/ceph:v19, name=mystifying_moore, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 15 10:37:08 compute-0 ceph-mon[74356]: pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:08 compute-0 ceph-mon[74356]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 15 10:37:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 15 10:37:08 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/765014972' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 15 10:37:08 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/765014972' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 15 10:37:08 compute-0 mystifying_moore[86591]: 
Dec 15 10:37:08 compute-0 mystifying_moore[86591]: [global]
Dec 15 10:37:08 compute-0 mystifying_moore[86591]:         fsid = 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:37:08 compute-0 mystifying_moore[86591]:         mon_host = 192.168.122.100
Dec 15 10:37:08 compute-0 systemd[1]: libpod-94c01e1af3a157dff6bc780eb1916cf819729c7f5f8616ea9bafaf761675497e.scope: Deactivated successfully.
Dec 15 10:37:08 compute-0 podman[86616]: 2025-12-15 10:37:08.459663978 +0000 UTC m=+0.030522741 container died 94c01e1af3a157dff6bc780eb1916cf819729c7f5f8616ea9bafaf761675497e (image=quay.io/ceph/ceph:v19, name=mystifying_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:37:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b27b0d25d82620fa71cfcd85e9f92544e4c8141732e8c8519ad5f26e02b9138-merged.mount: Deactivated successfully.
Dec 15 10:37:08 compute-0 podman[86616]: 2025-12-15 10:37:08.494593149 +0000 UTC m=+0.065451892 container remove 94c01e1af3a157dff6bc780eb1916cf819729c7f5f8616ea9bafaf761675497e (image=quay.io/ceph/ceph:v19, name=mystifying_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:37:08 compute-0 systemd[1]: libpod-conmon-94c01e1af3a157dff6bc780eb1916cf819729c7f5f8616ea9bafaf761675497e.scope: Deactivated successfully.
Dec 15 10:37:08 compute-0 sudo[86573]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:08 compute-0 sudo[86653]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxevuerpcxkpiuxloupyuojjupafbpuf ; /usr/bin/python3'
Dec 15 10:37:08 compute-0 sudo[86653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:08 compute-0 python3[86655]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:08 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:08 compute-0 podman[86656]: 2025-12-15 10:37:08.853595318 +0000 UTC m=+0.042476830 container create 65d80d20812fc6b50d796502e5b6a473dc26d351491fe82e6aeb0b26f030d3e8 (image=quay.io/ceph/ceph:v19, name=interesting_wilson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 15 10:37:08 compute-0 systemd[1]: Started libpod-conmon-65d80d20812fc6b50d796502e5b6a473dc26d351491fe82e6aeb0b26f030d3e8.scope.
Dec 15 10:37:08 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b5aeb1c089a5b8825d7ec70b90888972f8b9db50d3d37af6e6a3ac3f9afc97/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b5aeb1c089a5b8825d7ec70b90888972f8b9db50d3d37af6e6a3ac3f9afc97/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b5aeb1c089a5b8825d7ec70b90888972f8b9db50d3d37af6e6a3ac3f9afc97/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:08 compute-0 podman[86656]: 2025-12-15 10:37:08.83547334 +0000 UTC m=+0.024354872 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:09 compute-0 podman[86656]: 2025-12-15 10:37:09.010800664 +0000 UTC m=+0.199682196 container init 65d80d20812fc6b50d796502e5b6a473dc26d351491fe82e6aeb0b26f030d3e8 (image=quay.io/ceph/ceph:v19, name=interesting_wilson, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:37:09 compute-0 podman[86656]: 2025-12-15 10:37:09.017018086 +0000 UTC m=+0.205899598 container start 65d80d20812fc6b50d796502e5b6a473dc26d351491fe82e6aeb0b26f030d3e8 (image=quay.io/ceph/ceph:v19, name=interesting_wilson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:37:09 compute-0 podman[86656]: 2025-12-15 10:37:09.025691744 +0000 UTC m=+0.214573286 container attach 65d80d20812fc6b50d796502e5b6a473dc26d351491fe82e6aeb0b26f030d3e8 (image=quay.io/ceph/ceph:v19, name=interesting_wilson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:37:09 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/765014972' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 15 10:37:09 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/765014972' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 15 10:37:09 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Dec 15 10:37:09 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/53667251' entity='client.admin' 
Dec 15 10:37:09 compute-0 interesting_wilson[86671]: set ssl_option
Dec 15 10:37:09 compute-0 systemd[1]: libpod-65d80d20812fc6b50d796502e5b6a473dc26d351491fe82e6aeb0b26f030d3e8.scope: Deactivated successfully.
Dec 15 10:37:09 compute-0 podman[86656]: 2025-12-15 10:37:09.633932621 +0000 UTC m=+0.822814133 container died 65d80d20812fc6b50d796502e5b6a473dc26d351491fe82e6aeb0b26f030d3e8 (image=quay.io/ceph/ceph:v19, name=interesting_wilson, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:37:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-10b5aeb1c089a5b8825d7ec70b90888972f8b9db50d3d37af6e6a3ac3f9afc97-merged.mount: Deactivated successfully.
Dec 15 10:37:09 compute-0 podman[86656]: 2025-12-15 10:37:09.779459716 +0000 UTC m=+0.968341228 container remove 65d80d20812fc6b50d796502e5b6a473dc26d351491fe82e6aeb0b26f030d3e8 (image=quay.io/ceph/ceph:v19, name=interesting_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 15 10:37:09 compute-0 sudo[86653]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:09 compute-0 systemd[1]: libpod-conmon-65d80d20812fc6b50d796502e5b6a473dc26d351491fe82e6aeb0b26f030d3e8.scope: Deactivated successfully.
Dec 15 10:37:09 compute-0 sudo[86731]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haomnsjnzvhvttgpoznmespxienxsnco ; /usr/bin/python3'
Dec 15 10:37:09 compute-0 sudo[86731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:10 compute-0 python3[86733]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:10 compute-0 podman[86734]: 2025-12-15 10:37:10.154842206 +0000 UTC m=+0.046294944 container create 1f9401d01fbdba11fe36f15387d8eb6a741d8861068e1ade16394b363ff8fbb3 (image=quay.io/ceph/ceph:v19, name=magical_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 15 10:37:10 compute-0 systemd[1]: Started libpod-conmon-1f9401d01fbdba11fe36f15387d8eb6a741d8861068e1ade16394b363ff8fbb3.scope.
Dec 15 10:37:10 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1fdbfea0dc957051a062adb72b9c55f3bc549dc2295bbff22e6f7498a075506/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1fdbfea0dc957051a062adb72b9c55f3bc549dc2295bbff22e6f7498a075506/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1fdbfea0dc957051a062adb72b9c55f3bc549dc2295bbff22e6f7498a075506/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:10 compute-0 podman[86734]: 2025-12-15 10:37:10.215799284 +0000 UTC m=+0.107252042 container init 1f9401d01fbdba11fe36f15387d8eb6a741d8861068e1ade16394b363ff8fbb3 (image=quay.io/ceph/ceph:v19, name=magical_yonath, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 15 10:37:10 compute-0 podman[86734]: 2025-12-15 10:37:10.220467172 +0000 UTC m=+0.111919900 container start 1f9401d01fbdba11fe36f15387d8eb6a741d8861068e1ade16394b363ff8fbb3 (image=quay.io/ceph/ceph:v19, name=magical_yonath, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:37:10 compute-0 podman[86734]: 2025-12-15 10:37:10.223397933 +0000 UTC m=+0.114850661 container attach 1f9401d01fbdba11fe36f15387d8eb6a741d8861068e1ade16394b363ff8fbb3 (image=quay.io/ceph/ceph:v19, name=magical_yonath, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 15 10:37:10 compute-0 podman[86734]: 2025-12-15 10:37:10.136009628 +0000 UTC m=+0.027462386 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14225 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 15 10:37:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 15 10:37:10 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Dec 15 10:37:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 15 10:37:10 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:10 compute-0 magical_yonath[86749]: Scheduled rgw.rgw update...
Dec 15 10:37:10 compute-0 magical_yonath[86749]: Scheduled ingress.rgw.default update...
Dec 15 10:37:10 compute-0 ceph-mon[74356]: pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:10 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/53667251' entity='client.admin' 
Dec 15 10:37:10 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:10 compute-0 systemd[1]: libpod-1f9401d01fbdba11fe36f15387d8eb6a741d8861068e1ade16394b363ff8fbb3.scope: Deactivated successfully.
Dec 15 10:37:10 compute-0 podman[86734]: 2025-12-15 10:37:10.636968174 +0000 UTC m=+0.528420912 container died 1f9401d01fbdba11fe36f15387d8eb6a741d8861068e1ade16394b363ff8fbb3 (image=quay.io/ceph/ceph:v19, name=magical_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [balancer INFO root] Optimize plan auto_2025-12-15_10:37:10
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [balancer INFO root] do_upmap
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'images', '.mgr', 'vms']
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 15 10:37:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1fdbfea0dc957051a062adb72b9c55f3bc549dc2295bbff22e6f7498a075506-merged.mount: Deactivated successfully.
Dec 15 10:37:10 compute-0 podman[86734]: 2025-12-15 10:37:10.680533413 +0000 UTC m=+0.571986141 container remove 1f9401d01fbdba11fe36f15387d8eb6a741d8861068e1ade16394b363ff8fbb3 (image=quay.io/ceph/ceph:v19, name=magical_yonath, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:37:10 compute-0 systemd[1]: libpod-conmon-1f9401d01fbdba11fe36f15387d8eb6a741d8861068e1ade16394b363ff8fbb3.scope: Deactivated successfully.
Dec 15 10:37:10 compute-0 sudo[86731]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 15 10:37:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Dec 15 10:37:10 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 15 10:37:10 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 15 10:37:11 compute-0 python3[86861]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 10:37:11 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:37:11 compute-0 python3[86932]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765795030.8295915-37281-174196557064305/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:37:11 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec 15 10:37:11 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 15 10:37:11 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e25 e25: 2 total, 2 up, 2 in
Dec 15 10:37:11 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e25: 2 total, 2 up, 2 in
Dec 15 10:37:11 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev b9b37b78-ebf3-447b-8552-20ff1a4f0c05 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 15 10:37:11 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Dec 15 10:37:11 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec 15 10:37:11 compute-0 ceph-mon[74356]: from='client.14225 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:37:11 compute-0 ceph-mon[74356]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 15 10:37:11 compute-0 ceph-mon[74356]: Saving service ingress.rgw.default spec with placement count:2
Dec 15 10:37:11 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:11 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec 15 10:37:11 compute-0 sudo[86980]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdfvvhuoofhojjxikaoummdqygqkpxqq ; /usr/bin/python3'
Dec 15 10:37:11 compute-0 sudo[86980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:12 compute-0 python3[86982]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:12 compute-0 podman[86983]: 2025-12-15 10:37:12.139458731 +0000 UTC m=+0.047461468 container create b91e033725f3ba3903b91de1a54bfa5bcb5fae12fe692efa94e4f34af32a19fd (image=quay.io/ceph/ceph:v19, name=confident_payne, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:37:12 compute-0 systemd[1]: Started libpod-conmon-b91e033725f3ba3903b91de1a54bfa5bcb5fae12fe692efa94e4f34af32a19fd.scope.
Dec 15 10:37:12 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85b60a6432161fc83a0bad77cceb96d63ae6a9334489f838491f7d15db7149ec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85b60a6432161fc83a0bad77cceb96d63ae6a9334489f838491f7d15db7149ec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85b60a6432161fc83a0bad77cceb96d63ae6a9334489f838491f7d15db7149ec/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:12 compute-0 podman[86983]: 2025-12-15 10:37:12.117179867 +0000 UTC m=+0.025182624 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:12 compute-0 podman[86983]: 2025-12-15 10:37:12.220906921 +0000 UTC m=+0.128909688 container init b91e033725f3ba3903b91de1a54bfa5bcb5fae12fe692efa94e4f34af32a19fd (image=quay.io/ceph/ceph:v19, name=confident_payne, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 15 10:37:12 compute-0 podman[86983]: 2025-12-15 10:37:12.226784533 +0000 UTC m=+0.134787270 container start b91e033725f3ba3903b91de1a54bfa5bcb5fae12fe692efa94e4f34af32a19fd (image=quay.io/ceph/ceph:v19, name=confident_payne, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:37:12 compute-0 podman[86983]: 2025-12-15 10:37:12.370798206 +0000 UTC m=+0.278800963 container attach b91e033725f3ba3903b91de1a54bfa5bcb5fae12fe692efa94e4f34af32a19fd (image=quay.io/ceph/ceph:v19, name=confident_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Dec 15 10:37:12 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14227 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:37:12 compute-0 ceph-mgr[74651]: [cephadm INFO root] Saving service node-exporter spec with placement *
Dec 15 10:37:12 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Dec 15 10:37:12 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec 15 10:37:12 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec 15 10:37:12 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:12 compute-0 ceph-mgr[74651]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Dec 15 10:37:12 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Dec 15 10:37:12 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec 15 10:37:12 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 15 10:37:12 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e26 e26: 2 total, 2 up, 2 in
Dec 15 10:37:12 compute-0 ceph-mon[74356]: pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:12 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 15 10:37:12 compute-0 ceph-mon[74356]: osdmap e25: 2 total, 2 up, 2 in
Dec 15 10:37:12 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec 15 10:37:12 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e26: 2 total, 2 up, 2 in
Dec 15 10:37:12 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev ef600f71-4a93-4a0e-aa2a-8170cb8dd6e3 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 15 10:37:12 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:12 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Dec 15 10:37:12 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec 15 10:37:12 compute-0 ceph-mgr[74651]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Dec 15 10:37:12 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Dec 15 10:37:12 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec 15 10:37:12 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:12 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Dec 15 10:37:12 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:12 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Dec 15 10:37:12 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:12 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:12 compute-0 ceph-mgr[74651]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Dec 15 10:37:12 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Dec 15 10:37:12 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec 15 10:37:12 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:12 compute-0 confident_payne[86998]: Scheduled node-exporter update...
Dec 15 10:37:12 compute-0 confident_payne[86998]: Scheduled grafana update...
Dec 15 10:37:12 compute-0 confident_payne[86998]: Scheduled prometheus update...
Dec 15 10:37:12 compute-0 confident_payne[86998]: Scheduled alertmanager update...
Dec 15 10:37:12 compute-0 systemd[1]: libpod-b91e033725f3ba3903b91de1a54bfa5bcb5fae12fe692efa94e4f34af32a19fd.scope: Deactivated successfully.
Dec 15 10:37:12 compute-0 podman[86983]: 2025-12-15 10:37:12.974322784 +0000 UTC m=+0.882325521 container died b91e033725f3ba3903b91de1a54bfa5bcb5fae12fe692efa94e4f34af32a19fd (image=quay.io/ceph/ceph:v19, name=confident_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:37:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-85b60a6432161fc83a0bad77cceb96d63ae6a9334489f838491f7d15db7149ec-merged.mount: Deactivated successfully.
Dec 15 10:37:13 compute-0 podman[86983]: 2025-12-15 10:37:13.029223296 +0000 UTC m=+0.937226033 container remove b91e033725f3ba3903b91de1a54bfa5bcb5fae12fe692efa94e4f34af32a19fd (image=quay.io/ceph/ceph:v19, name=confident_payne, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Dec 15 10:37:13 compute-0 systemd[1]: libpod-conmon-b91e033725f3ba3903b91de1a54bfa5bcb5fae12fe692efa94e4f34af32a19fd.scope: Deactivated successfully.
Dec 15 10:37:13 compute-0 sudo[86980]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:13 compute-0 sudo[87056]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shigaugtqypfwmcbhgbmuwnrcboeigdh ; /usr/bin/python3'
Dec 15 10:37:13 compute-0 sudo[87056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:13 compute-0 python3[87058]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:13 compute-0 podman[87059]: 2025-12-15 10:37:13.615284992 +0000 UTC m=+0.039955670 container create 45666baf1b6f5fe0c0717da296fbd41c54b3c67de6329be55c36be2813945092 (image=quay.io/ceph/ceph:v19, name=infallible_davinci, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec 15 10:37:13 compute-0 systemd[1]: Started libpod-conmon-45666baf1b6f5fe0c0717da296fbd41c54b3c67de6329be55c36be2813945092.scope.
Dec 15 10:37:13 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e33d75ede0b4c24de86c945c673c1941e946b32b3c768d6a446eaa39285539/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e33d75ede0b4c24de86c945c673c1941e946b32b3c768d6a446eaa39285539/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e33d75ede0b4c24de86c945c673c1941e946b32b3c768d6a446eaa39285539/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:13 compute-0 podman[87059]: 2025-12-15 10:37:13.597923885 +0000 UTC m=+0.022594583 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:13 compute-0 podman[87059]: 2025-12-15 10:37:13.726130283 +0000 UTC m=+0.150800991 container init 45666baf1b6f5fe0c0717da296fbd41c54b3c67de6329be55c36be2813945092 (image=quay.io/ceph/ceph:v19, name=infallible_davinci, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:37:13 compute-0 podman[87059]: 2025-12-15 10:37:13.731084769 +0000 UTC m=+0.155755457 container start 45666baf1b6f5fe0c0717da296fbd41c54b3c67de6329be55c36be2813945092 (image=quay.io/ceph/ceph:v19, name=infallible_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 15 10:37:13 compute-0 ceph-mon[74356]: from='client.14227 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:37:13 compute-0 ceph-mon[74356]: Saving service node-exporter spec with placement *
Dec 15 10:37:13 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:13 compute-0 ceph-mon[74356]: Saving service grafana spec with placement compute-0;count:1
Dec 15 10:37:13 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 15 10:37:13 compute-0 ceph-mon[74356]: osdmap e26: 2 total, 2 up, 2 in
Dec 15 10:37:13 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:13 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec 15 10:37:13 compute-0 ceph-mon[74356]: Saving service prometheus spec with placement compute-0;count:1
Dec 15 10:37:13 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:13 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:13 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:13 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:13 compute-0 podman[87059]: 2025-12-15 10:37:13.744859098 +0000 UTC m=+0.169529796 container attach 45666baf1b6f5fe0c0717da296fbd41c54b3c67de6329be55c36be2813945092 (image=quay.io/ceph/ceph:v19, name=infallible_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:37:13 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec 15 10:37:13 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 15 10:37:13 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 15 10:37:13 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 15 10:37:13 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e27 e27: 2 total, 2 up, 2 in
Dec 15 10:37:13 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 2 up, 2 in
Dec 15 10:37:13 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev 089b4a17-2437-4da6-9034-58519eb451cd (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 15 10:37:13 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Dec 15 10:37:13 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec 15 10:37:13 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 27 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=27 pruub=14.123393059s) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active pruub 47.568305969s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:13 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 27 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=27 pruub=14.123393059s) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown pruub 47.568305969s@ mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Dec 15 10:37:14 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3133683041' entity='client.admin' 
Dec 15 10:37:14 compute-0 systemd[1]: libpod-45666baf1b6f5fe0c0717da296fbd41c54b3c67de6329be55c36be2813945092.scope: Deactivated successfully.
Dec 15 10:37:14 compute-0 podman[87059]: 2025-12-15 10:37:14.233616723 +0000 UTC m=+0.658287421 container died 45666baf1b6f5fe0c0717da296fbd41c54b3c67de6329be55c36be2813945092 (image=quay.io/ceph/ceph:v19, name=infallible_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 15 10:37:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-60e33d75ede0b4c24de86c945c673c1941e946b32b3c768d6a446eaa39285539-merged.mount: Deactivated successfully.
Dec 15 10:37:14 compute-0 podman[87059]: 2025-12-15 10:37:14.334482197 +0000 UTC m=+0.759152875 container remove 45666baf1b6f5fe0c0717da296fbd41c54b3c67de6329be55c36be2813945092 (image=quay.io/ceph/ceph:v19, name=infallible_davinci, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 15 10:37:14 compute-0 systemd[1]: libpod-conmon-45666baf1b6f5fe0c0717da296fbd41c54b3c67de6329be55c36be2813945092.scope: Deactivated successfully.
Dec 15 10:37:14 compute-0 sudo[87056]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:14 compute-0 sudo[87136]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axfbkzvrompylgrjiqyucbvllpeqnigw ; /usr/bin/python3'
Dec 15 10:37:14 compute-0 sudo[87136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:14 compute-0 python3[87138]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:14 compute-0 podman[87139]: 2025-12-15 10:37:14.736380011 +0000 UTC m=+0.051522598 container create ae9d1385aacfc0b5b9dbe0cd9de53655a9f240964cc11be4e1d1c8b6fa730056 (image=quay.io/ceph/ceph:v19, name=xenodochial_antonelli, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:37:14 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec 15 10:37:14 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 15 10:37:14 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e28 e28: 2 total, 2 up, 2 in
Dec 15 10:37:14 compute-0 systemd[1]: Started libpod-conmon-ae9d1385aacfc0b5b9dbe0cd9de53655a9f240964cc11be4e1d1c8b6fa730056.scope.
Dec 15 10:37:14 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e28: 2 total, 2 up, 2 in
Dec 15 10:37:14 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev c586e8de-704b-42fe-b8c2-c9583d6c95ab (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 15 10:37:14 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Dec 15 10:37:14 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 15 10:37:14 compute-0 ceph-mon[74356]: pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:14 compute-0 ceph-mon[74356]: Saving service alertmanager spec with placement compute-0;count:1
Dec 15 10:37:14 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 15 10:37:14 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 15 10:37:14 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 15 10:37:14 compute-0 ceph-mon[74356]: osdmap e27: 2 total, 2 up, 2 in
Dec 15 10:37:14 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec 15 10:37:14 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/3133683041' entity='client.admin' 
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.1c( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.1e( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.1d( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.1f( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.1b( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.1a( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.9( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.8( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.4( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.3( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.2( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.1( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.5( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.6( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.7( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.a( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.b( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.d( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.f( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.c( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.10( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.e( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.11( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.12( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.13( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.14( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.15( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.16( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.17( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.18( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.19( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.1b( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.1c( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.1e( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.1d( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.1a( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.8( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.9( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.4( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.5( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.1( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.3( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.6( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.0( empty local-lis/les=27/28 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.1f( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.7( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.a( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.b( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.d( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.f( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.10( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.e( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.c( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.11( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.12( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.14( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.2( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.15( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.19( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.13( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.16( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.17( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 28 pg[3.18( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:14 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304277d42a40616b1e2f03a574561ad4cf894401ee3b380861dcec6a7d161e56/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304277d42a40616b1e2f03a574561ad4cf894401ee3b380861dcec6a7d161e56/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304277d42a40616b1e2f03a574561ad4cf894401ee3b380861dcec6a7d161e56/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:14 compute-0 podman[87139]: 2025-12-15 10:37:14.713342209 +0000 UTC m=+0.028484806 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:14 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v83: 69 pgs: 1 peering, 62 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:14 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Dec 15 10:37:14 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:14 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Dec 15 10:37:14 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:14 compute-0 podman[87139]: 2025-12-15 10:37:14.815103931 +0000 UTC m=+0.130246538 container init ae9d1385aacfc0b5b9dbe0cd9de53655a9f240964cc11be4e1d1c8b6fa730056 (image=quay.io/ceph/ceph:v19, name=xenodochial_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:37:14 compute-0 podman[87139]: 2025-12-15 10:37:14.821478853 +0000 UTC m=+0.136621430 container start ae9d1385aacfc0b5b9dbe0cd9de53655a9f240964cc11be4e1d1c8b6fa730056 (image=quay.io/ceph/ceph:v19, name=xenodochial_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 15 10:37:14 compute-0 podman[87139]: 2025-12-15 10:37:14.825300785 +0000 UTC m=+0.140443392 container attach ae9d1385aacfc0b5b9dbe0cd9de53655a9f240964cc11be4e1d1c8b6fa730056 (image=quay.io/ceph/ceph:v19, name=xenodochial_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 15 10:37:15 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Dec 15 10:37:15 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1565571558' entity='client.admin' 
Dec 15 10:37:15 compute-0 systemd[1]: libpod-ae9d1385aacfc0b5b9dbe0cd9de53655a9f240964cc11be4e1d1c8b6fa730056.scope: Deactivated successfully.
Dec 15 10:37:15 compute-0 podman[87139]: 2025-12-15 10:37:15.247533024 +0000 UTC m=+0.562675621 container died ae9d1385aacfc0b5b9dbe0cd9de53655a9f240964cc11be4e1d1c8b6fa730056 (image=quay.io/ceph/ceph:v19, name=xenodochial_antonelli, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:37:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-304277d42a40616b1e2f03a574561ad4cf894401ee3b380861dcec6a7d161e56-merged.mount: Deactivated successfully.
Dec 15 10:37:15 compute-0 podman[87139]: 2025-12-15 10:37:15.299602059 +0000 UTC m=+0.614744636 container remove ae9d1385aacfc0b5b9dbe0cd9de53655a9f240964cc11be4e1d1c8b6fa730056 (image=quay.io/ceph/ceph:v19, name=xenodochial_antonelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 15 10:37:15 compute-0 systemd[1]: libpod-conmon-ae9d1385aacfc0b5b9dbe0cd9de53655a9f240964cc11be4e1d1c8b6fa730056.scope: Deactivated successfully.
Dec 15 10:37:15 compute-0 sudo[87136]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:15 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.1c deep-scrub starts
Dec 15 10:37:15 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.1c deep-scrub ok
Dec 15 10:37:15 compute-0 sudo[87214]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erqhmcjxnbpcdawvoyqyfmelgnrlcnzi ; /usr/bin/python3'
Dec 15 10:37:15 compute-0 sudo[87214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:15 compute-0 python3[87216]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:15 compute-0 podman[87217]: 2025-12-15 10:37:15.661890813 +0000 UTC m=+0.041654183 container create e1198cae810b90c942517cdb5697e719905dd2dc1036661eb6771a72721096a9 (image=quay.io/ceph/ceph:v19, name=serene_meitner, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:37:15 compute-0 systemd[1]: Started libpod-conmon-e1198cae810b90c942517cdb5697e719905dd2dc1036661eb6771a72721096a9.scope.
Dec 15 10:37:15 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eda3622e6f621d967bbd149ba506502b9d119325eb7e1d7e7a6faf7412487ce/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eda3622e6f621d967bbd149ba506502b9d119325eb7e1d7e7a6faf7412487ce/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eda3622e6f621d967bbd149ba506502b9d119325eb7e1d7e7a6faf7412487ce/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:15 compute-0 podman[87217]: 2025-12-15 10:37:15.64382871 +0000 UTC m=+0.023592100 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:15 compute-0 podman[87217]: 2025-12-15 10:37:15.740989346 +0000 UTC m=+0.120752746 container init e1198cae810b90c942517cdb5697e719905dd2dc1036661eb6771a72721096a9 (image=quay.io/ceph/ceph:v19, name=serene_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 15 10:37:15 compute-0 podman[87217]: 2025-12-15 10:37:15.746398717 +0000 UTC m=+0.126162087 container start e1198cae810b90c942517cdb5697e719905dd2dc1036661eb6771a72721096a9 (image=quay.io/ceph/ceph:v19, name=serene_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:37:15 compute-0 podman[87217]: 2025-12-15 10:37:15.762401575 +0000 UTC m=+0.142164975 container attach e1198cae810b90c942517cdb5697e719905dd2dc1036661eb6771a72721096a9 (image=quay.io/ceph/ceph:v19, name=serene_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:37:15 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec 15 10:37:15 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec 15 10:37:15 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 15 10:37:15 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 15 10:37:15 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e29 e29: 2 total, 2 up, 2 in
Dec 15 10:37:15 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 2 up, 2 in
Dec 15 10:37:15 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev 3068411c-67b6-43ee-8cbd-a5f92e848b45 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec 15 10:37:15 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Dec 15 10:37:15 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec 15 10:37:15 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 29 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=29 pruub=14.204547882s) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active pruub 49.657112122s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:15 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 29 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=29 pruub=13.126070023s) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active pruub 48.578643799s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:15 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 29 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=29 pruub=14.204547882s) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown pruub 49.657112122s@ mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:15 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 29 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=29 pruub=13.126070023s) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown pruub 48.578643799s@ mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:15 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 15 10:37:15 compute-0 ceph-mon[74356]: osdmap e28: 2 total, 2 up, 2 in
Dec 15 10:37:15 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 15 10:37:15 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:15 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:15 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/1565571558' entity='client.admin' 
Dec 15 10:37:15 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec 15 10:37:15 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 15 10:37:15 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 15 10:37:15 compute-0 ceph-mon[74356]: osdmap e29: 2 total, 2 up, 2 in
Dec 15 10:37:15 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec 15 10:37:15 compute-0 ceph-mgr[74651]: [progress WARNING root] Starting Global Recovery Event,125 pgs not in active + clean state
Dec 15 10:37:16 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Dec 15 10:37:16 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2651836682' entity='client.admin' 
Dec 15 10:37:16 compute-0 systemd[1]: libpod-e1198cae810b90c942517cdb5697e719905dd2dc1036661eb6771a72721096a9.scope: Deactivated successfully.
Dec 15 10:37:16 compute-0 podman[87257]: 2025-12-15 10:37:16.173285696 +0000 UTC m=+0.026259396 container died e1198cae810b90c942517cdb5697e719905dd2dc1036661eb6771a72721096a9 (image=quay.io/ceph/ceph:v19, name=serene_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 15 10:37:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-6eda3622e6f621d967bbd149ba506502b9d119325eb7e1d7e7a6faf7412487ce-merged.mount: Deactivated successfully.
Dec 15 10:37:16 compute-0 podman[87257]: 2025-12-15 10:37:16.212691757 +0000 UTC m=+0.065665417 container remove e1198cae810b90c942517cdb5697e719905dd2dc1036661eb6771a72721096a9 (image=quay.io/ceph/ceph:v19, name=serene_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 15 10:37:16 compute-0 systemd[1]: libpod-conmon-e1198cae810b90c942517cdb5697e719905dd2dc1036661eb6771a72721096a9.scope: Deactivated successfully.
Dec 15 10:37:16 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:37:16 compute-0 sudo[87214]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:16 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Dec 15 10:37:16 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Dec 15 10:37:16 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:37:16 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:16 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:37:16 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:16 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:37:16 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:16 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:37:16 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:16 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec 15 10:37:16 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 15 10:37:16 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:37:16 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:37:16 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 15 10:37:16 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:37:16 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec 15 10:37:16 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec 15 10:37:16 compute-0 sudo[87296]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tymllstgujfxzqggdqcgsevfslxzcqvj ; /usr/bin/python3'
Dec 15 10:37:16 compute-0 sudo[87296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:16 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec 15 10:37:16 compute-0 python3[87298]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:16 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v85: 131 pgs: 1 peering, 124 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:16 compute-0 sudo[87296]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:16 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec 15 10:37:16 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:16 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 15 10:37:16 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e30 e30: 2 total, 2 up, 2 in
Dec 15 10:37:16 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e30: 2 total, 2 up, 2 in
Dec 15 10:37:16 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev 8d6bd704-af3a-4b77-b2b0-851ce853f89e (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 15 10:37:16 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev b9b37b78-ebf3-447b-8552-20ff1a4f0c05 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 15 10:37:16 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event b9b37b78-ebf3-447b-8552-20ff1a4f0c05 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Dec 15 10:37:16 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev ef600f71-4a93-4a0e-aa2a-8170cb8dd6e3 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 15 10:37:16 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event ef600f71-4a93-4a0e-aa2a-8170cb8dd6e3 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Dec 15 10:37:16 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev 089b4a17-2437-4da6-9034-58519eb451cd (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 15 10:37:16 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event 089b4a17-2437-4da6-9034-58519eb451cd (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Dec 15 10:37:16 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev c586e8de-704b-42fe-b8c2-c9583d6c95ab (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 15 10:37:16 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event c586e8de-704b-42fe-b8c2-c9583d6c95ab (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Dec 15 10:37:16 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev 3068411c-67b6-43ee-8cbd-a5f92e848b45 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec 15 10:37:16 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event 3068411c-67b6-43ee-8cbd-a5f92e848b45 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Dec 15 10:37:16 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev 8d6bd704-af3a-4b77-b2b0-851ce853f89e (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 15 10:37:16 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event 8d6bd704-af3a-4b77-b2b0-851ce853f89e (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Dec 15 10:37:16 compute-0 ceph-mon[74356]: pgmap v83: 69 pgs: 1 peering, 62 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:16 compute-0 ceph-mon[74356]: 2.1e scrub starts
Dec 15 10:37:16 compute-0 ceph-mon[74356]: 2.1e scrub ok
Dec 15 10:37:16 compute-0 ceph-mon[74356]: 3.1c deep-scrub starts
Dec 15 10:37:16 compute-0 ceph-mon[74356]: 3.1c deep-scrub ok
Dec 15 10:37:16 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/2651836682' entity='client.admin' 
Dec 15 10:37:16 compute-0 ceph-mon[74356]: 3.1b scrub starts
Dec 15 10:37:16 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:16 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:16 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:16 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:16 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 15 10:37:16 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:37:16 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.1f( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.1f( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.1e( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.1e( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.11( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.10( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.10( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.11( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.13( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.12( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.13( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.14( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.14( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.15( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.12( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.17( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.16( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.15( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.16( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.17( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.9( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.8( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.8( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.9( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.b( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.a( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.a( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.b( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.d( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.c( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.c( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.7( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.1( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.6( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.1( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.d( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.3( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.2( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.7( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.6( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.4( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.5( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.5( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.4( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.2( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.3( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.e( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.f( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.f( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.1c( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.e( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.1d( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.1d( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.1c( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.1a( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.1b( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.1b( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.1a( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.19( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.19( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.18( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.18( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.1f( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.1e( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.1e( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.11( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.10( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.11( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.10( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.13( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.1f( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.12( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.13( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.14( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.14( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.17( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.12( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.16( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.9( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.15( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.15( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.17( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.b( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.a( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.a( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.8( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.b( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.16( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.d( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.c( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.8( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.c( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.7( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.1( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.0( empty local-lis/les=29/30 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.0( empty local-lis/les=29/30 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.6( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.2( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.1( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.6( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.d( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.4( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.9( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.5( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.2( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.7( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.5( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.e( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.f( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.1c( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.1d( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.1d( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.f( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.1c( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.e( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.1a( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.1b( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.1a( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.19( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.19( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.3( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.1b( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.4( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.18( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[4.3( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 30 pg[5.18( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [0] r=0 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:16 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:37:16 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:37:17 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Dec 15 10:37:17 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Dec 15 10:37:17 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:37:17 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:37:17 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec 15 10:37:17 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 15 10:37:17 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e31 e31: 2 total, 2 up, 2 in
Dec 15 10:37:17 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e31: 2 total, 2 up, 2 in
Dec 15 10:37:17 compute-0 ceph-mon[74356]: 2.1b scrub starts
Dec 15 10:37:17 compute-0 ceph-mon[74356]: 2.1b scrub ok
Dec 15 10:37:17 compute-0 ceph-mon[74356]: 3.1b scrub ok
Dec 15 10:37:17 compute-0 ceph-mon[74356]: Updating compute-2:/etc/ceph/ceph.conf
Dec 15 10:37:17 compute-0 ceph-mon[74356]: pgmap v85: 131 pgs: 1 peering, 124 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:17 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:17 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 15 10:37:17 compute-0 ceph-mon[74356]: osdmap e30: 2 total, 2 up, 2 in
Dec 15 10:37:17 compute-0 ceph-mon[74356]: 2.1f scrub starts
Dec 15 10:37:17 compute-0 ceph-mon[74356]: 2.1f scrub ok
Dec 15 10:37:17 compute-0 ceph-mon[74356]: Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:37:17 compute-0 ceph-mon[74356]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:37:18 compute-0 sudo[87335]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sylfeazodybtrpvjryfjwjnfwszdfycu ; /usr/bin/python3'
Dec 15 10:37:18 compute-0 sudo[87335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:18 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:37:18 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:37:18 compute-0 python3[87337]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.difmqj/server_addr 192.168.122.100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:18 compute-0 podman[87338]: 2025-12-15 10:37:18.280785293 +0000 UTC m=+0.050492673 container create fda615baa3e643df6aeab3b90f5eea846a40d90fbc8ef112a2b714303a76d62b (image=quay.io/ceph/ceph:v19, name=elegant_goldberg, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 15 10:37:18 compute-0 systemd[75682]: Starting Mark boot as successful...
Dec 15 10:37:18 compute-0 systemd[1]: Started libpod-conmon-fda615baa3e643df6aeab3b90f5eea846a40d90fbc8ef112a2b714303a76d62b.scope.
Dec 15 10:37:18 compute-0 systemd[75682]: Finished Mark boot as successful.
Dec 15 10:37:18 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de9d38b1479c084f8b8c98a945dc6569875b46c4a0068538ae451a9cda4e659f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de9d38b1479c084f8b8c98a945dc6569875b46c4a0068538ae451a9cda4e659f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de9d38b1479c084f8b8c98a945dc6569875b46c4a0068538ae451a9cda4e659f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:18 compute-0 podman[87338]: 2025-12-15 10:37:18.349333756 +0000 UTC m=+0.119041156 container init fda615baa3e643df6aeab3b90f5eea846a40d90fbc8ef112a2b714303a76d62b (image=quay.io/ceph/ceph:v19, name=elegant_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:37:18 compute-0 podman[87338]: 2025-12-15 10:37:18.255676282 +0000 UTC m=+0.025383692 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:18 compute-0 podman[87338]: 2025-12-15 10:37:18.356099257 +0000 UTC m=+0.125806637 container start fda615baa3e643df6aeab3b90f5eea846a40d90fbc8ef112a2b714303a76d62b (image=quay.io/ceph/ceph:v19, name=elegant_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Dec 15 10:37:18 compute-0 podman[87338]: 2025-12-15 10:37:18.360394517 +0000 UTC m=+0.130101928 container attach fda615baa3e643df6aeab3b90f5eea846a40d90fbc8ef112a2b714303a76d62b (image=quay.io/ceph/ceph:v19, name=elegant_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 15 10:37:18 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec 15 10:37:18 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec 15 10:37:18 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:37:18 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:18 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:37:18 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.difmqj/server_addr}] v 0)
Dec 15 10:37:18 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:18 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 15 10:37:18 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1651886367' entity='client.admin' 
Dec 15 10:37:18 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:18 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v88: 162 pgs: 1 peering, 124 unknown, 37 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:18 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev 2cb2fe10-11e5-4e98-8616-d90d9b97440e (Updating mon deployment (+2 -> 3))
Dec 15 10:37:18 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec 15 10:37:18 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:18 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 15 10:37:18 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 15 10:37:18 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 15 10:37:18 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 15 10:37:18 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:37:18 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:37:18 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Dec 15 10:37:18 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Dec 15 10:37:18 compute-0 systemd[1]: libpod-fda615baa3e643df6aeab3b90f5eea846a40d90fbc8ef112a2b714303a76d62b.scope: Deactivated successfully.
Dec 15 10:37:18 compute-0 podman[87379]: 2025-12-15 10:37:18.816712948 +0000 UTC m=+0.023092756 container died fda615baa3e643df6aeab3b90f5eea846a40d90fbc8ef112a2b714303a76d62b (image=quay.io/ceph/ceph:v19, name=elegant_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 15 10:37:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-de9d38b1479c084f8b8c98a945dc6569875b46c4a0068538ae451a9cda4e659f-merged.mount: Deactivated successfully.
Dec 15 10:37:18 compute-0 podman[87379]: 2025-12-15 10:37:18.862820827 +0000 UTC m=+0.069200615 container remove fda615baa3e643df6aeab3b90f5eea846a40d90fbc8ef112a2b714303a76d62b (image=quay.io/ceph/ceph:v19, name=elegant_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Dec 15 10:37:18 compute-0 systemd[1]: libpod-conmon-fda615baa3e643df6aeab3b90f5eea846a40d90fbc8ef112a2b714303a76d62b.scope: Deactivated successfully.
Dec 15 10:37:18 compute-0 sudo[87335]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:18 compute-0 ceph-mon[74356]: 3.1a scrub starts
Dec 15 10:37:18 compute-0 ceph-mon[74356]: 3.1a scrub ok
Dec 15 10:37:18 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 15 10:37:18 compute-0 ceph-mon[74356]: osdmap e31: 2 total, 2 up, 2 in
Dec 15 10:37:18 compute-0 ceph-mon[74356]: 2.1d deep-scrub starts
Dec 15 10:37:18 compute-0 ceph-mon[74356]: 2.1d deep-scrub ok
Dec 15 10:37:18 compute-0 ceph-mon[74356]: Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:37:18 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:18 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:18 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/1651886367' entity='client.admin' 
Dec 15 10:37:18 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:18 compute-0 ceph-mon[74356]: pgmap v88: 162 pgs: 1 peering, 124 unknown, 37 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:18 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:18 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 15 10:37:18 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 15 10:37:18 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:37:18 compute-0 ceph-mon[74356]: Deploying daemon mon.compute-2 on compute-2
Dec 15 10:37:19 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Dec 15 10:37:19 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 31 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=31 pruub=11.546103477s) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active pruub 50.672676086s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 31 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=31 pruub=11.546103477s) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown pruub 50.672676086s@ mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 sudo[87417]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aitrmaecnwnuyldytlanljsrmkayhbis ; /usr/bin/python3'
Dec 15 10:37:19 compute-0 sudo[87417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:19 compute-0 python3[87419]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard//server_addr 192.168.122.101 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:19 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec 15 10:37:19 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec 15 10:37:19 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 15 10:37:19 compute-0 podman[87420]: 2025-12-15 10:37:19.769231375 +0000 UTC m=+0.063583401 container create f7f8813553b8e241f75d905d7c502437437e94188e536222771f0bec2acffea2 (image=quay.io/ceph/ceph:v19, name=gallant_solomon, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:37:19 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 15 10:37:19 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e32 e32: 2 total, 2 up, 2 in
Dec 15 10:37:19 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e32: 2 total, 2 up, 2 in
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.1a( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.1b( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.18( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.19( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.1e( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.1f( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.c( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.d( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.6( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.7( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.4( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.3( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.2( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.f( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.1( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.e( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.9( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.5( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.8( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.b( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.a( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.15( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.14( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.17( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.16( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.11( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.10( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.13( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.1d( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.1c( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.18( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.12( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.1f( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.19( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.1b( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.d( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.6( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.1e( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.c( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.7( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.4( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.1a( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.f( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.3( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.1( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.b( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.8( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.a( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.5( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.14( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.15( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.9( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.e( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.16( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.11( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.0( empty local-lis/les=31/32 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.2( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.10( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.1d( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.13( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.1c( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.12( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 32 pg[6.17( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:19 compute-0 systemd[1]: Started libpod-conmon-f7f8813553b8e241f75d905d7c502437437e94188e536222771f0bec2acffea2.scope.
Dec 15 10:37:19 compute-0 podman[87420]: 2025-12-15 10:37:19.731246723 +0000 UTC m=+0.025598759 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:19 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131245a3dda1d986eb23f871508e9bdf5f8c3b341799cd5260d8a611758747d3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131245a3dda1d986eb23f871508e9bdf5f8c3b341799cd5260d8a611758747d3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131245a3dda1d986eb23f871508e9bdf5f8c3b341799cd5260d8a611758747d3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:19 compute-0 podman[87420]: 2025-12-15 10:37:19.851354143 +0000 UTC m=+0.145706249 container init f7f8813553b8e241f75d905d7c502437437e94188e536222771f0bec2acffea2 (image=quay.io/ceph/ceph:v19, name=gallant_solomon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 15 10:37:19 compute-0 podman[87420]: 2025-12-15 10:37:19.857340409 +0000 UTC m=+0.151692425 container start f7f8813553b8e241f75d905d7c502437437e94188e536222771f0bec2acffea2 (image=quay.io/ceph/ceph:v19, name=gallant_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:37:19 compute-0 podman[87420]: 2025-12-15 10:37:19.861042779 +0000 UTC m=+0.155394835 container attach f7f8813553b8e241f75d905d7c502437437e94188e536222771f0bec2acffea2 (image=quay.io/ceph/ceph:v19, name=gallant_solomon, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:37:19 compute-0 ceph-mon[74356]: 3.1e scrub starts
Dec 15 10:37:19 compute-0 ceph-mon[74356]: 3.1e scrub ok
Dec 15 10:37:19 compute-0 ceph-mon[74356]: 2.a scrub starts
Dec 15 10:37:19 compute-0 ceph-mon[74356]: 2.a scrub ok
Dec 15 10:37:19 compute-0 ceph-mon[74356]: 3.1d scrub starts
Dec 15 10:37:19 compute-0 ceph-mon[74356]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec 15 10:37:19 compute-0 ceph-mon[74356]: Cluster is now healthy
Dec 15 10:37:19 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 15 10:37:19 compute-0 ceph-mon[74356]: osdmap e32: 2 total, 2 up, 2 in
Dec 15 10:37:20 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/dashboard//server_addr}] v 0)
Dec 15 10:37:20 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/846327388' entity='client.admin' 
Dec 15 10:37:20 compute-0 systemd[1]: libpod-f7f8813553b8e241f75d905d7c502437437e94188e536222771f0bec2acffea2.scope: Deactivated successfully.
Dec 15 10:37:20 compute-0 podman[87420]: 2025-12-15 10:37:20.270809087 +0000 UTC m=+0.565161103 container died f7f8813553b8e241f75d905d7c502437437e94188e536222771f0bec2acffea2 (image=quay.io/ceph/ceph:v19, name=gallant_solomon, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:37:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-131245a3dda1d986eb23f871508e9bdf5f8c3b341799cd5260d8a611758747d3-merged.mount: Deactivated successfully.
Dec 15 10:37:20 compute-0 podman[87420]: 2025-12-15 10:37:20.30634295 +0000 UTC m=+0.600694966 container remove f7f8813553b8e241f75d905d7c502437437e94188e536222771f0bec2acffea2 (image=quay.io/ceph/ceph:v19, name=gallant_solomon, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:37:20 compute-0 systemd[1]: libpod-conmon-f7f8813553b8e241f75d905d7c502437437e94188e536222771f0bec2acffea2.scope: Deactivated successfully.
Dec 15 10:37:20 compute-0 sudo[87417]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:20 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Dec 15 10:37:20 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Dec 15 10:37:20 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v90: 193 pgs: 62 unknown, 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:20 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec 15 10:37:20 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e33 e33: 2 total, 2 up, 2 in
Dec 15 10:37:20 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e33: 2 total, 2 up, 2 in
Dec 15 10:37:20 compute-0 ceph-mgr[74651]: [progress INFO root] Writing back 8 completed events
Dec 15 10:37:20 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 15 10:37:20 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:21 compute-0 sudo[87495]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxxisarhjjtmihxvjgjabwnitqabfviq ; /usr/bin/python3'
Dec 15 10:37:21 compute-0 sudo[87495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:37:21 compute-0 ceph-mon[74356]: 3.1d scrub ok
Dec 15 10:37:21 compute-0 ceph-mon[74356]: 2.7 scrub starts
Dec 15 10:37:21 compute-0 ceph-mon[74356]: 2.7 scrub ok
Dec 15 10:37:21 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/846327388' entity='client.admin' 
Dec 15 10:37:21 compute-0 ceph-mon[74356]: pgmap v90: 193 pgs: 62 unknown, 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:21 compute-0 ceph-mon[74356]: osdmap e33: 2 total, 2 up, 2 in
Dec 15 10:37:21 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:21 compute-0 python3[87497]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard//server_addr 192.168.122.102 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:37:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec 15 10:37:21 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec 15 10:37:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:37:21 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 15 10:37:21 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 15 10:37:21 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 15 10:37:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 15 10:37:21 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 15 10:37:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:37:21 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:37:21 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Dec 15 10:37:21 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Dec 15 10:37:21 compute-0 podman[87498]: 2025-12-15 10:37:21.324212295 +0000 UTC m=+0.041763767 container create bbb455712bdffa721ddf817ec764be3c185824fc733bbf138b0785484082f851 (image=quay.io/ceph/ceph:v19, name=vibrant_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Dec 15 10:37:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec 15 10:37:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Dec 15 10:37:21 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/931530778; not ready for session (expect reconnect)
Dec 15 10:37:21 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.9 deep-scrub starts
Dec 15 10:37:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 15 10:37:21 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 15 10:37:21 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Dec 15 10:37:21 compute-0 ceph-mon[74356]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 15 10:37:21 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 15 10:37:21 compute-0 ceph-mon[74356]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 15 10:37:21 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 15 10:37:21 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 15 10:37:21 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 15 10:37:21 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.9 deep-scrub ok
Dec 15 10:37:21 compute-0 ceph-mon[74356]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Dec 15 10:37:21 compute-0 systemd[1]: Started libpod-conmon-bbb455712bdffa721ddf817ec764be3c185824fc733bbf138b0785484082f851.scope.
Dec 15 10:37:21 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 15 10:37:21 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d7d4117a8777d4229c2860fc434cde70cc5f84b657c64465a006edd73f0a684/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d7d4117a8777d4229c2860fc434cde70cc5f84b657c64465a006edd73f0a684/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d7d4117a8777d4229c2860fc434cde70cc5f84b657c64465a006edd73f0a684/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:21 compute-0 podman[87498]: 2025-12-15 10:37:21.403348344 +0000 UTC m=+0.120899836 container init bbb455712bdffa721ddf817ec764be3c185824fc733bbf138b0785484082f851 (image=quay.io/ceph/ceph:v19, name=vibrant_einstein, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Dec 15 10:37:21 compute-0 podman[87498]: 2025-12-15 10:37:21.309588377 +0000 UTC m=+0.027139879 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:21 compute-0 podman[87498]: 2025-12-15 10:37:21.411173831 +0000 UTC m=+0.128725303 container start bbb455712bdffa721ddf817ec764be3c185824fc733bbf138b0785484082f851 (image=quay.io/ceph/ceph:v19, name=vibrant_einstein, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:37:21 compute-0 podman[87498]: 2025-12-15 10:37:21.414120697 +0000 UTC m=+0.131672199 container attach bbb455712bdffa721ddf817ec764be3c185824fc733bbf138b0785484082f851 (image=quay.io/ceph/ceph:v19, name=vibrant_einstein, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 15 10:37:21 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 15 10:37:21 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 15 10:37:22 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 15 10:37:22 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/931530778; not ready for session (expect reconnect)
Dec 15 10:37:22 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 15 10:37:22 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 15 10:37:22 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 15 10:37:22 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec 15 10:37:22 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec 15 10:37:22 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v92: 193 pgs: 31 unknown, 162 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:22 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 15 10:37:23 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:37:23 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 15 10:37:23 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 15 10:37:23 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 15 10:37:23 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/830349144; not ready for session (expect reconnect)
Dec 15 10:37:23 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 15 10:37:23 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:23 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 15 10:37:23 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/931530778; not ready for session (expect reconnect)
Dec 15 10:37:23 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 15 10:37:23 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 15 10:37:23 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 15 10:37:23 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec 15 10:37:23 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec 15 10:37:24 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/830349144; not ready for session (expect reconnect)
Dec 15 10:37:24 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 15 10:37:24 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:24 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 15 10:37:24 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/931530778; not ready for session (expect reconnect)
Dec 15 10:37:24 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 15 10:37:24 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 15 10:37:24 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 15 10:37:24 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec 15 10:37:24 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec 15 10:37:24 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 15 10:37:24 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 15 10:37:24 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v93: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:24 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 15 10:37:24 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:24 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 15 10:37:24 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:24 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 15 10:37:24 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:24 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 15 10:37:24 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:24 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 15 10:37:24 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:24 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 15 10:37:24 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:25 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 15 10:37:25 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 15 10:37:25 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/830349144; not ready for session (expect reconnect)
Dec 15 10:37:25 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 15 10:37:25 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:25 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 15 10:37:25 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.6 deep-scrub starts
Dec 15 10:37:25 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.6 deep-scrub ok
Dec 15 10:37:25 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/931530778; not ready for session (expect reconnect)
Dec 15 10:37:25 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 15 10:37:25 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 15 10:37:25 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 15 10:37:25 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event 3e226374-538b-4850-ba35-3585f0803481 (Global Recovery Event) in 10 seconds
Dec 15 10:37:25 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 15 10:37:26 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/830349144; not ready for session (expect reconnect)
Dec 15 10:37:26 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:26 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 15 10:37:26 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec 15 10:37:26 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec 15 10:37:26 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/931530778; not ready for session (expect reconnect)
Dec 15 10:37:26 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 15 10:37:26 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 15 10:37:26 compute-0 ceph-mon[74356]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Dec 15 10:37:26 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : monmap epoch 2
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : last_changed 2025-12-15T10:37:21.342621+0000
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : created 2025-12-15T10:34:45.470940+0000
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 15 10:37:26 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : fsmap 
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e33: 2 total, 2 up, 2 in
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.difmqj(active, since 2m)
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:26 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:26 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:26 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev 2cb2fe10-11e5-4e98-8616-d90d9b97440e (Updating mon deployment (+2 -> 3))
Dec 15 10:37:26 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event 2cb2fe10-11e5-4e98-8616-d90d9b97440e (Updating mon deployment (+2 -> 3)) in 8 seconds
Dec 15 10:37:26 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:26 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev 3c9f9c6e-894a-487f-99d9-cda93fe13af6 (Updating mgr deployment (+2 -> 3))
Dec 15 10:37:26 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.gxhwsu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gxhwsu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gxhwsu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 15 10:37:26 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec 15 10:37:26 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 15 10:37:26 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:37:26 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.gxhwsu on compute-2
Dec 15 10:37:26 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.gxhwsu on compute-2
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 2.8 scrub starts
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 2.8 scrub ok
Dec 15 10:37:26 compute-0 ceph-mon[74356]: Deploying daemon mon.compute-1 on compute-1
Dec 15 10:37:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 15 10:37:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 3.9 deep-scrub ok
Dec 15 10:37:26 compute-0 ceph-mon[74356]: mon.compute-0 calling monitor election
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 2.4 deep-scrub starts
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 2.4 deep-scrub ok
Dec 15 10:37:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 3.5 scrub starts
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 3.5 scrub ok
Dec 15 10:37:26 compute-0 ceph-mon[74356]: pgmap v92: 193 pgs: 31 unknown, 162 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 2.9 scrub starts
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 2.9 scrub ok
Dec 15 10:37:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 15 10:37:26 compute-0 ceph-mon[74356]: mon.compute-2 calling monitor election
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 3.3 scrub starts
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 3.3 scrub ok
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 2.6 deep-scrub starts
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 2.6 deep-scrub ok
Dec 15 10:37:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 3.1 scrub starts
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 3.1 scrub ok
Dec 15 10:37:26 compute-0 ceph-mon[74356]: pgmap v93: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 2.1 scrub starts
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 2.1 scrub ok
Dec 15 10:37:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 3.6 deep-scrub starts
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 3.6 deep-scrub ok
Dec 15 10:37:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 2.0 scrub starts
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 2.0 scrub ok
Dec 15 10:37:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 3.4 scrub starts
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 3.4 scrub ok
Dec 15 10:37:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 15 10:37:26 compute-0 ceph-mon[74356]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 15 10:37:26 compute-0 ceph-mon[74356]: monmap epoch 2
Dec 15 10:37:26 compute-0 ceph-mon[74356]: fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:37:26 compute-0 ceph-mon[74356]: last_changed 2025-12-15T10:37:21.342621+0000
Dec 15 10:37:26 compute-0 ceph-mon[74356]: created 2025-12-15T10:34:45.470940+0000
Dec 15 10:37:26 compute-0 ceph-mon[74356]: min_mon_release 19 (squid)
Dec 15 10:37:26 compute-0 ceph-mon[74356]: election_strategy: 1
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 15 10:37:26 compute-0 ceph-mon[74356]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 15 10:37:26 compute-0 ceph-mon[74356]: fsmap 
Dec 15 10:37:26 compute-0 ceph-mon[74356]: osdmap e33: 2 total, 2 up, 2 in
Dec 15 10:37:26 compute-0 ceph-mon[74356]: mgrmap e9: compute-0.difmqj(active, since 2m)
Dec 15 10:37:26 compute-0 ceph-mon[74356]: overall HEALTH_OK
Dec 15 10:37:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gxhwsu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 15 10:37:26 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gxhwsu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 15 10:37:26 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e34 e34: 2 total, 2 up, 2 in
Dec 15 10:37:26 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e34: 2 total, 2 up, 2 in
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.1a( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.314043999s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 55.463890076s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.18( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.462427139s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.612293243s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.1d( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.310546875s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active pruub 58.460426331s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.1a( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.314000130s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.463890076s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.1d( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.310521126s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.460426331s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.18( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.462394714s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.612293243s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.1a( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.459579468s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.609580994s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.18( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.462248802s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.612270355s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.1a( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.459561348s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.609580994s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.18( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.462237358s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.612270355s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.1b( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.459473610s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.609580994s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.1b( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.459457397s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.609580994s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.1c( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.310257912s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active pruub 58.460430145s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.1c( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.310239792s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.460430145s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.1b( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.458582878s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.609561920s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.1b( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.458566666s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.609561920s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.1a( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.458518982s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.609535217s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.1a( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.458500862s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.609535217s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.19( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.312273026s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 55.463333130s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.19( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.312256813s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.463333130s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.1a( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.309217453s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active pruub 58.460441589s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.1a( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.309203148s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.460441589s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.e( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.458256721s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.609516144s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.1c( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.458201408s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.609470367s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.1c( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.458186150s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.609470367s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.e( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.458242416s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.609516144s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.9( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.312149048s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active pruub 58.463542938s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.9( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.312130928s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.463542938s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.f( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.457995415s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.609455109s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.f( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.457983017s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.609455109s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.1e( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.312585831s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 55.463798523s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.1e( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.312282562s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.463798523s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.e( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.457863808s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.609436035s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.e( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.457848549s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.609436035s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.2( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.457797050s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.609462738s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.2( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.457782745s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.609462738s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.3( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.312049866s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active pruub 58.463745117s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.d( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.311995506s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 55.463695526s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.3( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.312030792s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.463745117s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.7( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.312200546s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 55.463993073s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.d( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.311979294s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.463695526s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.7( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.312187195s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.463993073s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.5( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.457551956s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.609443665s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.5( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.457509995s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.609443665s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.7( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.457456589s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.609420776s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.5( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.311643600s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active pruub 58.463630676s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.5( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.311626434s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.463630676s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.7( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.457442284s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.609420776s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.1( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.457048416s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.609138489s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.3( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.312018394s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 55.464179993s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.1( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.456982613s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.609138489s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.3( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.312005043s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.464179993s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.1( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.456642151s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.608898163s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.2( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.312001228s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 55.464286804s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.1( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.456627846s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.608898163s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.2( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.311988831s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.464286804s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.4( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.456793785s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.609176636s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.4( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.456775665s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.609176636s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.5( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.312298775s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 55.464733124s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.5( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.312280655s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.464733124s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.a( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.312035561s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active pruub 58.464561462s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.d( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.456617355s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.609169006s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.a( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.312022209s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.464561462s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.d( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.456604004s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.609169006s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.c( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.456125259s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.608749390s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.c( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.456108093s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.608749390s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.c( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.311961174s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active pruub 58.464748383s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.e( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.312023163s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 55.464832306s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.a( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.455841064s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.608657837s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.c( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.311922073s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.464748383s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.e( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.312005997s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.464832306s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.a( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.455827713s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.608657837s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.8( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.311697960s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 55.464687347s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.9( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.456398010s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.609405518s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.8( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.311683655s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.464687347s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.9( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.456383705s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.609405518s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.e( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.311702728s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active pruub 58.464744568s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.e( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.311683655s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.464744568s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.9( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.455438614s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.608604431s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.8( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.455577850s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.608753204s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.9( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.455426216s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.608604431s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.8( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.455556870s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.608753204s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.f( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.311335564s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active pruub 58.464588165s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.f( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.311321259s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.464588165s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.10( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.311153412s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active pruub 58.464626312s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.16( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.455098152s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.608585358s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.10( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.311135292s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.464626312s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.15( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.311284065s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 55.464782715s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.16( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.455080032s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.608585358s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.15( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.311266899s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.464782715s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.11( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.311125755s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active pruub 58.464767456s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.11( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.311109543s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.464767456s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.15( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.454857826s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.608535767s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.15( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.454842567s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.608535767s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.17( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.315122604s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 55.468917847s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.13( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.311057091s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active pruub 58.464866638s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.17( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.315104485s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.468917847s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.15( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.454722404s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.608547211s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.13( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.311041832s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.464866638s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.15( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.454708099s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.608547211s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.13( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.454342842s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.608303070s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.14( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.310853004s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active pruub 58.464813232s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.13( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.454327583s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.608303070s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.14( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.310838699s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.464813232s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.a( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.311512947s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 55.464714050s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.15( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.310718536s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active pruub 58.464847565s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.10( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.453803062s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.607959747s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.15( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.310701370s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.464847565s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.a( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.310567856s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.464714050s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.16( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.310694695s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active pruub 58.464866638s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.10( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.453785896s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.607959747s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.16( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.310677528s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.464866638s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.12( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.314231873s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 55.468563080s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.11( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.453584671s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.607921600s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.12( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.314198494s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.468563080s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.11( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.453553200s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.607921600s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.1f( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.448154449s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.602561951s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[4.1f( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.448139191s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.602561951s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.1f( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.453718185s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 60.608261108s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.1c( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.313977242s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 55.468532562s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[5.1f( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.453698158s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.608261108s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[6.1c( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=34 pruub=9.313961029s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.468532562s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.d( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.309664726s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active pruub 58.464576721s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[3.d( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=12.309642792s) [1] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.464576721s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[7.1d( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[2.19( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[7.13( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[2.15( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[7.10( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[2.10( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[2.13( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[2.e( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[7.b( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[2.d( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[7.a( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[7.14( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[2.c( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[7.8( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[7.e( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[7.9( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[2.1( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[7.4( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[7.3( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[2.6( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[7.6( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[2.4( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[7.2( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[2.9( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[2.a( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[7.1e( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[7.f( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[2.1b( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[7.18( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[2.1e( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[7.1b( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 34 pg[2.1f( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:37:26 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v95: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 15 10:37:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Dec 15 10:37:27 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/830349144; not ready for session (expect reconnect)
Dec 15 10:37:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 15 10:37:27 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:27 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 15 10:37:27 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Dec 15 10:37:27 compute-0 ceph-mon[74356]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 15 10:37:27 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 15 10:37:27 compute-0 ceph-mon[74356]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 15 10:37:27 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:27 compute-0 ceph-mon[74356]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 15 10:37:27 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 15 10:37:27 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 15 10:37:27 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 15 10:37:27 compute-0 ceph-mon[74356]: paxos.0).electionLogic(10) init, last seen epoch 10
Dec 15 10:37:27 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 15 10:37:27 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Dec 15 10:37:27 compute-0 ceph-mgr[74651]: mgr.server handle_report got status from non-daemon mon.compute-2
Dec 15 10:37:27 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:27.346+0000 7fc9fa836640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Dec 15 10:37:27 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 15 10:37:27 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 15 10:37:27 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:37:27 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 15 10:37:28 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 15 10:37:28 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 15 10:37:28 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/830349144; not ready for session (expect reconnect)
Dec 15 10:37:28 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 15 10:37:28 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:28 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 15 10:37:28 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.19 deep-scrub starts
Dec 15 10:37:28 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.19 deep-scrub ok
Dec 15 10:37:28 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 15 10:37:28 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v96: 193 pgs: 62 peering, 131 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:28 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 15 10:37:29 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/830349144; not ready for session (expect reconnect)
Dec 15 10:37:29 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 15 10:37:29 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:29 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 15 10:37:29 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Dec 15 10:37:29 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Dec 15 10:37:29 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 15 10:37:30 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/830349144; not ready for session (expect reconnect)
Dec 15 10:37:30 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 15 10:37:30 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:30 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 15 10:37:30 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Dec 15 10:37:30 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Dec 15 10:37:30 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 15 10:37:30 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 15 10:37:30 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v97: 193 pgs: 94 peering, 99 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:30 compute-0 ceph-mgr[74651]: [progress INFO root] Writing back 10 completed events
Dec 15 10:37:30 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 15 10:37:30 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 15 10:37:31 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 15 10:37:31 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 15 10:37:31 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/830349144; not ready for session (expect reconnect)
Dec 15 10:37:31 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 15 10:37:31 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:31 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 15 10:37:31 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Dec 15 10:37:31 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Dec 15 10:37:31 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 15 10:37:31 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 15 10:37:32 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/830349144; not ready for session (expect reconnect)
Dec 15 10:37:32 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:32 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 15 10:37:32 compute-0 ceph-mon[74356]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Dec 15 10:37:32 compute-0 ceph-mon[74356]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : monmap epoch 3
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : last_changed 2025-12-15T10:37:27.242658+0000
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : created 2025-12-15T10:34:45.470940+0000
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Dec 15 10:37:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : fsmap 
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e34: 2 total, 2 up, 2 in
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.difmqj(active, since 2m)
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 15 10:37:32 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Dec 15 10:37:32 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Dec 15 10:37:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:37:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e35 e35: 2 total, 2 up, 2 in
Dec 15 10:37:32 compute-0 ceph-mon[74356]: 3.1f scrub starts
Dec 15 10:37:32 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 15 10:37:32 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:32 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 15 10:37:32 compute-0 ceph-mon[74356]: mon.compute-0 calling monitor election
Dec 15 10:37:32 compute-0 ceph-mon[74356]: mon.compute-2 calling monitor election
Dec 15 10:37:32 compute-0 ceph-mon[74356]: 3.1f scrub ok
Dec 15 10:37:32 compute-0 ceph-mon[74356]: 7.1f scrub starts
Dec 15 10:37:32 compute-0 ceph-mon[74356]: 7.1f scrub ok
Dec 15 10:37:32 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:32 compute-0 ceph-mon[74356]: 5.19 deep-scrub starts
Dec 15 10:37:32 compute-0 ceph-mon[74356]: 5.19 deep-scrub ok
Dec 15 10:37:32 compute-0 ceph-mon[74356]: pgmap v96: 193 pgs: 62 peering, 131 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:32 compute-0 ceph-mon[74356]: 7.1c scrub starts
Dec 15 10:37:32 compute-0 ceph-mon[74356]: 7.1c scrub ok
Dec 15 10:37:32 compute-0 ceph-mon[74356]: mon.compute-1 calling monitor election
Dec 15 10:37:32 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:32 compute-0 ceph-mon[74356]: 4.19 scrub starts
Dec 15 10:37:32 compute-0 ceph-mon[74356]: 4.19 scrub ok
Dec 15 10:37:32 compute-0 ceph-mon[74356]: 2.18 scrub starts
Dec 15 10:37:32 compute-0 ceph-mon[74356]: 2.18 scrub ok
Dec 15 10:37:32 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:32 compute-0 ceph-mon[74356]: 6.1b scrub starts
Dec 15 10:37:32 compute-0 ceph-mon[74356]: 6.1b scrub ok
Dec 15 10:37:32 compute-0 ceph-mon[74356]: pgmap v97: 193 pgs: 94 peering, 99 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:32 compute-0 ceph-mon[74356]: 2.17 scrub starts
Dec 15 10:37:32 compute-0 ceph-mon[74356]: 2.17 scrub ok
Dec 15 10:37:32 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:32 compute-0 ceph-mon[74356]: 6.18 scrub starts
Dec 15 10:37:32 compute-0 ceph-mon[74356]: 6.18 scrub ok
Dec 15 10:37:32 compute-0 ceph-mon[74356]: 7.12 deep-scrub starts
Dec 15 10:37:32 compute-0 ceph-mon[74356]: 7.12 deep-scrub ok
Dec 15 10:37:32 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:32 compute-0 ceph-mon[74356]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 15 10:37:32 compute-0 ceph-mon[74356]: monmap epoch 3
Dec 15 10:37:32 compute-0 ceph-mon[74356]: fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:37:32 compute-0 ceph-mon[74356]: last_changed 2025-12-15T10:37:27.242658+0000
Dec 15 10:37:32 compute-0 ceph-mon[74356]: created 2025-12-15T10:34:45.470940+0000
Dec 15 10:37:32 compute-0 ceph-mon[74356]: min_mon_release 19 (squid)
Dec 15 10:37:32 compute-0 ceph-mon[74356]: election_strategy: 1
Dec 15 10:37:32 compute-0 ceph-mon[74356]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 15 10:37:32 compute-0 ceph-mon[74356]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 15 10:37:32 compute-0 ceph-mon[74356]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Dec 15 10:37:32 compute-0 ceph-mon[74356]: fsmap 
Dec 15 10:37:32 compute-0 ceph-mon[74356]: osdmap e34: 2 total, 2 up, 2 in
Dec 15 10:37:32 compute-0 ceph-mon[74356]: mgrmap e9: compute-0.difmqj(active, since 2m)
Dec 15 10:37:32 compute-0 ceph-mon[74356]: overall HEALTH_OK
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e35: 2 total, 2 up, 2 in
Dec 15 10:37:32 compute-0 ceph-mgr[74651]: [progress WARNING root] Starting Global Recovery Event,94 pgs not in active + clean state
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[7.1d( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[2.13( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[2.15( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[2.10( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[7.14( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[2.19( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[7.a( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[2.e( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[7.13( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[7.b( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[2.d( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[2.a( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[7.f( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[7.8( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[7.9( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[7.e( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[7.10( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[2.c( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[2.6( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[2.1( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[7.2( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[7.4( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[2.4( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[7.6( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[2.1b( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[2.9( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[7.18( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[7.1e( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[2.1f( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[7.1b( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[2.1e( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 35 pg[7.3( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.tlqguq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.tlqguq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.tlqguq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 15 10:37:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 15 10:37:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:37:32 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:37:32 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.tlqguq on compute-1
Dec 15 10:37:32 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.tlqguq on compute-1
Dec 15 10:37:32 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v99: 193 pgs: 94 peering, 99 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:33 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/830349144; not ready for session (expect reconnect)
Dec 15 10:37:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 15 10:37:33 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:33 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Dec 15 10:37:33 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Dec 15 10:37:33 compute-0 ceph-mon[74356]: 6.1f scrub starts
Dec 15 10:37:33 compute-0 ceph-mon[74356]: 6.1f scrub ok
Dec 15 10:37:33 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:33 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:33 compute-0 ceph-mon[74356]: osdmap e35: 2 total, 2 up, 2 in
Dec 15 10:37:33 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:33 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:33 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.tlqguq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 15 10:37:33 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.tlqguq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 15 10:37:33 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 15 10:37:33 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:37:33 compute-0 ceph-mon[74356]: Deploying daemon mgr.compute-1.tlqguq on compute-1
Dec 15 10:37:33 compute-0 ceph-mon[74356]: pgmap v99: 193 pgs: 94 peering, 99 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:33 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:33 compute-0 ceph-mon[74356]: 4.1c scrub starts
Dec 15 10:37:33 compute-0 ceph-mon[74356]: 4.1c scrub ok
Dec 15 10:37:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard//server_addr}] v 0)
Dec 15 10:37:33 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1695517241' entity='client.admin' 
Dec 15 10:37:33 compute-0 systemd[1]: libpod-bbb455712bdffa721ddf817ec764be3c185824fc733bbf138b0785484082f851.scope: Deactivated successfully.
Dec 15 10:37:33 compute-0 podman[87498]: 2025-12-15 10:37:33.783825189 +0000 UTC m=+12.501376661 container died bbb455712bdffa721ddf817ec764be3c185824fc733bbf138b0785484082f851 (image=quay.io/ceph/ceph:v19, name=vibrant_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:37:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d7d4117a8777d4229c2860fc434cde70cc5f84b657c64465a006edd73f0a684-merged.mount: Deactivated successfully.
Dec 15 10:37:33 compute-0 podman[87498]: 2025-12-15 10:37:33.819338332 +0000 UTC m=+12.536889804 container remove bbb455712bdffa721ddf817ec764be3c185824fc733bbf138b0785484082f851 (image=quay.io/ceph/ceph:v19, name=vibrant_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:37:33 compute-0 systemd[1]: libpod-conmon-bbb455712bdffa721ddf817ec764be3c185824fc733bbf138b0785484082f851.scope: Deactivated successfully.
Dec 15 10:37:33 compute-0 sudo[87495]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:34 compute-0 sudo[87571]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fccwmpsyfgujfncdmapbplxpkufvxpxs ; /usr/bin/python3'
Dec 15 10:37:34 compute-0 sudo[87571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:34 compute-0 python3[87573]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:34 compute-0 podman[87574]: 2025-12-15 10:37:34.237259246 +0000 UTC m=+0.047208836 container create 7530c1c75307fbea26afbb61e63235a65dce17c523513acd79cac22a8936cc50 (image=quay.io/ceph/ceph:v19, name=distracted_curran, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 15 10:37:34 compute-0 ceph-mgr[74651]: mgr.server handle_report got status from non-daemon mon.compute-1
Dec 15 10:37:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:34.245+0000 7fc9fa836640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Dec 15 10:37:34 compute-0 systemd[1]: Started libpod-conmon-7530c1c75307fbea26afbb61e63235a65dce17c523513acd79cac22a8936cc50.scope.
Dec 15 10:37:34 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4af665dec4854592cdd8f10707ff6a4d9f9ccbb5d8b14c68c2f5caa156e7e79/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4af665dec4854592cdd8f10707ff6a4d9f9ccbb5d8b14c68c2f5caa156e7e79/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4af665dec4854592cdd8f10707ff6a4d9f9ccbb5d8b14c68c2f5caa156e7e79/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:34 compute-0 podman[87574]: 2025-12-15 10:37:34.218165471 +0000 UTC m=+0.028115071 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:34 compute-0 podman[87574]: 2025-12-15 10:37:34.323032203 +0000 UTC m=+0.132981853 container init 7530c1c75307fbea26afbb61e63235a65dce17c523513acd79cac22a8936cc50 (image=quay.io/ceph/ceph:v19, name=distracted_curran, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:37:34 compute-0 podman[87574]: 2025-12-15 10:37:34.328547203 +0000 UTC m=+0.138496793 container start 7530c1c75307fbea26afbb61e63235a65dce17c523513acd79cac22a8936cc50 (image=quay.io/ceph/ceph:v19, name=distracted_curran, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:37:34 compute-0 podman[87574]: 2025-12-15 10:37:34.332109879 +0000 UTC m=+0.142059519 container attach 7530c1c75307fbea26afbb61e63235a65dce17c523513acd79cac22a8936cc50 (image=quay.io/ceph/ceph:v19, name=distracted_curran, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 15 10:37:34 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.1d deep-scrub starts
Dec 15 10:37:34 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.1d deep-scrub ok
Dec 15 10:37:34 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:37:34 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:34 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:37:34 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:34 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 15 10:37:34 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:34 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev 3c9f9c6e-894a-487f-99d9-cda93fe13af6 (Updating mgr deployment (+2 -> 3))
Dec 15 10:37:34 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event 3c9f9c6e-894a-487f-99d9-cda93fe13af6 (Updating mgr deployment (+2 -> 3)) in 8 seconds
Dec 15 10:37:34 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 15 10:37:34 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:34 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev 2be18237-e6f3-4639-8c29-51d7a4ccbb29 (Updating crash deployment (+1 -> 3))
Dec 15 10:37:34 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 15 10:37:34 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 15 10:37:34 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 15 10:37:34 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:37:34 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:37:34 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Dec 15 10:37:34 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Dec 15 10:37:34 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Dec 15 10:37:34 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/363509824' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 15 10:37:34 compute-0 ceph-mon[74356]: 2.16 scrub starts
Dec 15 10:37:34 compute-0 ceph-mon[74356]: 2.16 scrub ok
Dec 15 10:37:34 compute-0 ceph-mon[74356]: 2.14 scrub starts
Dec 15 10:37:34 compute-0 ceph-mon[74356]: 2.14 scrub ok
Dec 15 10:37:34 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/1695517241' entity='client.admin' 
Dec 15 10:37:34 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:34 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:34 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:34 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:34 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 15 10:37:34 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 15 10:37:34 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:37:34 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v100: 193 pgs: 62 peering, 131 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:35 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Dec 15 10:37:35 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Dec 15 10:37:35 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/363509824' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 15 10:37:35 compute-0 distracted_curran[87589]: module 'dashboard' is already disabled
Dec 15 10:37:35 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.difmqj(active, since 2m)
Dec 15 10:37:35 compute-0 systemd[1]: libpod-7530c1c75307fbea26afbb61e63235a65dce17c523513acd79cac22a8936cc50.scope: Deactivated successfully.
Dec 15 10:37:35 compute-0 podman[87574]: 2025-12-15 10:37:35.557315629 +0000 UTC m=+1.367265209 container died 7530c1c75307fbea26afbb61e63235a65dce17c523513acd79cac22a8936cc50 (image=quay.io/ceph/ceph:v19, name=distracted_curran, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 15 10:37:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4af665dec4854592cdd8f10707ff6a4d9f9ccbb5d8b14c68c2f5caa156e7e79-merged.mount: Deactivated successfully.
Dec 15 10:37:35 compute-0 podman[87574]: 2025-12-15 10:37:35.599658644 +0000 UTC m=+1.409608234 container remove 7530c1c75307fbea26afbb61e63235a65dce17c523513acd79cac22a8936cc50 (image=quay.io/ceph/ceph:v19, name=distracted_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 15 10:37:35 compute-0 systemd[1]: libpod-conmon-7530c1c75307fbea26afbb61e63235a65dce17c523513acd79cac22a8936cc50.scope: Deactivated successfully.
Dec 15 10:37:35 compute-0 sudo[87571]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:35 compute-0 sudo[87648]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrfhxzeiqhkpxiomoiefhckbiedktgle ; /usr/bin/python3'
Dec 15 10:37:35 compute-0 sudo[87648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:35 compute-0 ceph-mon[74356]: 5.1d deep-scrub starts
Dec 15 10:37:35 compute-0 ceph-mon[74356]: 5.1d deep-scrub ok
Dec 15 10:37:35 compute-0 ceph-mon[74356]: Deploying daemon crash.compute-2 on compute-2
Dec 15 10:37:35 compute-0 ceph-mon[74356]: 7.11 scrub starts
Dec 15 10:37:35 compute-0 ceph-mon[74356]: 7.11 scrub ok
Dec 15 10:37:35 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/363509824' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 15 10:37:35 compute-0 ceph-mon[74356]: pgmap v100: 193 pgs: 62 peering, 131 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:35 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/363509824' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 15 10:37:35 compute-0 ceph-mon[74356]: mgrmap e10: compute-0.difmqj(active, since 2m)
Dec 15 10:37:35 compute-0 python3[87650]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:36 compute-0 podman[87651]: 2025-12-15 10:37:36.000239192 +0000 UTC m=+0.042618616 container create 2a66e9df0936dd22d92e9a65dec3d244cb9a3a2d3666f3a29b5bca14af754f65 (image=quay.io/ceph/ceph:v19, name=nervous_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:37:36 compute-0 systemd[1]: Started libpod-conmon-2a66e9df0936dd22d92e9a65dec3d244cb9a3a2d3666f3a29b5bca14af754f65.scope.
Dec 15 10:37:36 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5926e0ee4717a8860950de2ab1e9eba698b83ef8a008e7ddcaca55391490ab11/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5926e0ee4717a8860950de2ab1e9eba698b83ef8a008e7ddcaca55391490ab11/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5926e0ee4717a8860950de2ab1e9eba698b83ef8a008e7ddcaca55391490ab11/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:36 compute-0 podman[87651]: 2025-12-15 10:37:36.06805031 +0000 UTC m=+0.110429754 container init 2a66e9df0936dd22d92e9a65dec3d244cb9a3a2d3666f3a29b5bca14af754f65 (image=quay.io/ceph/ceph:v19, name=nervous_cray, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 15 10:37:36 compute-0 podman[87651]: 2025-12-15 10:37:36.074300634 +0000 UTC m=+0.116680048 container start 2a66e9df0936dd22d92e9a65dec3d244cb9a3a2d3666f3a29b5bca14af754f65 (image=quay.io/ceph/ceph:v19, name=nervous_cray, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 15 10:37:36 compute-0 podman[87651]: 2025-12-15 10:37:35.981945442 +0000 UTC m=+0.024324886 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:36 compute-0 podman[87651]: 2025-12-15 10:37:36.077668795 +0000 UTC m=+0.120048209 container attach 2a66e9df0936dd22d92e9a65dec3d244cb9a3a2d3666f3a29b5bca14af754f65 (image=quay.io/ceph/ceph:v19, name=nervous_cray, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:37:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:37:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:37:36 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:37:36 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 15 10:37:36 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:36 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev 2be18237-e6f3-4639-8c29-51d7a4ccbb29 (Updating crash deployment (+1 -> 3))
Dec 15 10:37:36 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event 2be18237-e6f3-4639-8c29-51d7a4ccbb29 (Updating crash deployment (+1 -> 3)) in 2 seconds
Dec 15 10:37:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 15 10:37:36 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 15 10:37:36 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 15 10:37:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 15 10:37:36 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 15 10:37:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:37:36 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:37:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 15 10:37:36 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 15 10:37:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:37:36 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:37:36 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec 15 10:37:36 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec 15 10:37:36 compute-0 sudo[87689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:37:36 compute-0 sudo[87689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:36 compute-0 sudo[87689]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:36 compute-0 sudo[87714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 15 10:37:36 compute-0 sudo[87714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Dec 15 10:37:36 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2049069620' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 15 10:37:36 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v101: 193 pgs: 62 peering, 131 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:36 compute-0 podman[87775]: 2025-12-15 10:37:36.801744967 +0000 UTC m=+0.039272936 container create 4c3e8ebcc3259101ac101aefb81db04c43691d809781d8e9be994c566965ab6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_lehmann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 15 10:37:36 compute-0 systemd[1]: Started libpod-conmon-4c3e8ebcc3259101ac101aefb81db04c43691d809781d8e9be994c566965ab6d.scope.
Dec 15 10:37:36 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:36 compute-0 podman[87775]: 2025-12-15 10:37:36.875056496 +0000 UTC m=+0.112584495 container init 4c3e8ebcc3259101ac101aefb81db04c43691d809781d8e9be994c566965ab6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_lehmann, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 15 10:37:36 compute-0 podman[87775]: 2025-12-15 10:37:36.784296646 +0000 UTC m=+0.021824625 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:37:36 compute-0 podman[87775]: 2025-12-15 10:37:36.882187559 +0000 UTC m=+0.119715558 container start 4c3e8ebcc3259101ac101aefb81db04c43691d809781d8e9be994c566965ab6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_lehmann, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 15 10:37:36 compute-0 ceph-mon[74356]: 4.1d scrub starts
Dec 15 10:37:36 compute-0 ceph-mon[74356]: 4.1d scrub ok
Dec 15 10:37:36 compute-0 ceph-mon[74356]: 2.12 scrub starts
Dec 15 10:37:36 compute-0 ceph-mon[74356]: 2.12 scrub ok
Dec 15 10:37:36 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:36 compute-0 modest_lehmann[87791]: 167 167
Dec 15 10:37:36 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:36 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:36 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:36 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 15 10:37:36 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 15 10:37:36 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:37:36 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 15 10:37:36 compute-0 ceph-mon[74356]: from='mgr.14122 192.168.122.100:0/4084253942' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:37:36 compute-0 ceph-mon[74356]: 6.c scrub starts
Dec 15 10:37:36 compute-0 ceph-mon[74356]: 6.c scrub ok
Dec 15 10:37:36 compute-0 podman[87775]: 2025-12-15 10:37:36.888468294 +0000 UTC m=+0.125996263 container attach 4c3e8ebcc3259101ac101aefb81db04c43691d809781d8e9be994c566965ab6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_lehmann, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Dec 15 10:37:36 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/2049069620' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 15 10:37:36 compute-0 systemd[1]: libpod-4c3e8ebcc3259101ac101aefb81db04c43691d809781d8e9be994c566965ab6d.scope: Deactivated successfully.
Dec 15 10:37:36 compute-0 podman[87775]: 2025-12-15 10:37:36.889701985 +0000 UTC m=+0.127229974 container died 4c3e8ebcc3259101ac101aefb81db04c43691d809781d8e9be994c566965ab6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:37:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-f40d6797194e99a09368c2152873e74722a1510d95de9e8d4fb5aa2ceb2369e5-merged.mount: Deactivated successfully.
Dec 15 10:37:36 compute-0 podman[87775]: 2025-12-15 10:37:36.935855985 +0000 UTC m=+0.173383944 container remove 4c3e8ebcc3259101ac101aefb81db04c43691d809781d8e9be994c566965ab6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_lehmann, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 15 10:37:36 compute-0 systemd[1]: libpod-conmon-4c3e8ebcc3259101ac101aefb81db04c43691d809781d8e9be994c566965ab6d.scope: Deactivated successfully.
Dec 15 10:37:37 compute-0 podman[87815]: 2025-12-15 10:37:37.10685257 +0000 UTC m=+0.051831196 container create c98db62131172a862ee42e36046a5e9157d13b102f81040e3c9c96b728a61de5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 15 10:37:37 compute-0 systemd[1]: Started libpod-conmon-c98db62131172a862ee42e36046a5e9157d13b102f81040e3c9c96b728a61de5.scope.
Dec 15 10:37:37 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b24d42aa1a2774e24397dd72f339d3e451d01b54eba5d4b7d28f62f91c0f78e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b24d42aa1a2774e24397dd72f339d3e451d01b54eba5d4b7d28f62f91c0f78e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b24d42aa1a2774e24397dd72f339d3e451d01b54eba5d4b7d28f62f91c0f78e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b24d42aa1a2774e24397dd72f339d3e451d01b54eba5d4b7d28f62f91c0f78e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b24d42aa1a2774e24397dd72f339d3e451d01b54eba5d4b7d28f62f91c0f78e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:37 compute-0 podman[87815]: 2025-12-15 10:37:37.080406135 +0000 UTC m=+0.025384801 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:37:37 compute-0 podman[87815]: 2025-12-15 10:37:37.179949822 +0000 UTC m=+0.124928468 container init c98db62131172a862ee42e36046a5e9157d13b102f81040e3c9c96b728a61de5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:37:37 compute-0 podman[87815]: 2025-12-15 10:37:37.192808863 +0000 UTC m=+0.137787469 container start c98db62131172a862ee42e36046a5e9157d13b102f81040e3c9c96b728a61de5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 15 10:37:37 compute-0 podman[87815]: 2025-12-15 10:37:37.196769882 +0000 UTC m=+0.141748508 container attach c98db62131172a862ee42e36046a5e9157d13b102f81040e3c9c96b728a61de5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_varahamihira, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:37:37 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec 15 10:37:37 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec 15 10:37:37 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2049069620' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 15 10:37:37 compute-0 ceph-mgr[74651]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 15 10:37:37 compute-0 ceph-mgr[74651]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 15 10:37:37 compute-0 ceph-mgr[74651]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 15 10:37:37 compute-0 ceph-mgr[74651]: mgr respawn  1: '-n'
Dec 15 10:37:37 compute-0 ceph-mgr[74651]: mgr respawn  2: 'mgr.compute-0.difmqj'
Dec 15 10:37:37 compute-0 ceph-mgr[74651]: mgr respawn  3: '-f'
Dec 15 10:37:37 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.difmqj(active, since 2m)
Dec 15 10:37:37 compute-0 systemd[1]: libpod-2a66e9df0936dd22d92e9a65dec3d244cb9a3a2d3666f3a29b5bca14af754f65.scope: Deactivated successfully.
Dec 15 10:37:37 compute-0 podman[87651]: 2025-12-15 10:37:37.361652417 +0000 UTC m=+1.404031831 container died 2a66e9df0936dd22d92e9a65dec3d244cb9a3a2d3666f3a29b5bca14af754f65 (image=quay.io/ceph/ceph:v19, name=nervous_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True)
Dec 15 10:37:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-5926e0ee4717a8860950de2ab1e9eba698b83ef8a008e7ddcaca55391490ab11-merged.mount: Deactivated successfully.
Dec 15 10:37:37 compute-0 podman[87651]: 2025-12-15 10:37:37.406615079 +0000 UTC m=+1.448994493 container remove 2a66e9df0936dd22d92e9a65dec3d244cb9a3a2d3666f3a29b5bca14af754f65 (image=quay.io/ceph/ceph:v19, name=nervous_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 15 10:37:37 compute-0 sshd-session[75905]: Connection closed by 192.168.122.100 port 46008
Dec 15 10:37:37 compute-0 sshd-session[75961]: Connection closed by 192.168.122.100 port 41716
Dec 15 10:37:37 compute-0 sshd-session[75934]: Connection closed by 192.168.122.100 port 46020
Dec 15 10:37:37 compute-0 sshd-session[75760]: Connection closed by 192.168.122.100 port 45958
Dec 15 10:37:37 compute-0 sshd-session[75990]: Connection closed by 192.168.122.100 port 41722
Dec 15 10:37:37 compute-0 sshd-session[75876]: Connection closed by 192.168.122.100 port 46000
Dec 15 10:37:37 compute-0 sshd-session[75847]: Connection closed by 192.168.122.100 port 45994
Dec 15 10:37:37 compute-0 sshd-session[75818]: Connection closed by 192.168.122.100 port 45978
Dec 15 10:37:37 compute-0 sshd-session[75789]: Connection closed by 192.168.122.100 port 45964
Dec 15 10:37:37 compute-0 sshd-session[75731]: Connection closed by 192.168.122.100 port 45956
Dec 15 10:37:37 compute-0 sshd-session[75702]: Connection closed by 192.168.122.100 port 45952
Dec 15 10:37:37 compute-0 sshd-session[75701]: Connection closed by 192.168.122.100 port 45946
Dec 15 10:37:37 compute-0 sshd-session[75757]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 15 10:37:37 compute-0 sshd-session[75844]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 15 10:37:37 compute-0 sshd-session[75987]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 15 10:37:37 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Dec 15 10:37:37 compute-0 sshd-session[75902]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 15 10:37:37 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Dec 15 10:37:37 compute-0 sshd-session[75728]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 15 10:37:37 compute-0 sshd-session[75958]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 15 10:37:37 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Dec 15 10:37:37 compute-0 sshd-session[75815]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 15 10:37:37 compute-0 sshd-session[75678]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 15 10:37:37 compute-0 sshd-session[75873]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 15 10:37:37 compute-0 sshd-session[75786]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 15 10:37:37 compute-0 systemd-logind[797]: Session 25 logged out. Waiting for processes to exit.
Dec 15 10:37:37 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Dec 15 10:37:37 compute-0 systemd[1]: session-32.scope: Deactivated successfully.
Dec 15 10:37:37 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Dec 15 10:37:37 compute-0 systemd[1]: session-21.scope: Deactivated successfully.
Dec 15 10:37:37 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Dec 15 10:37:37 compute-0 systemd-logind[797]: Session 30 logged out. Waiting for processes to exit.
Dec 15 10:37:37 compute-0 systemd-logind[797]: Session 29 logged out. Waiting for processes to exit.
Dec 15 10:37:37 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Dec 15 10:37:37 compute-0 systemd-logind[797]: Session 33 logged out. Waiting for processes to exit.
Dec 15 10:37:37 compute-0 systemd-logind[797]: Session 27 logged out. Waiting for processes to exit.
Dec 15 10:37:37 compute-0 systemd[1]: libpod-conmon-2a66e9df0936dd22d92e9a65dec3d244cb9a3a2d3666f3a29b5bca14af754f65.scope: Deactivated successfully.
Dec 15 10:37:37 compute-0 systemd-logind[797]: Session 32 logged out. Waiting for processes to exit.
Dec 15 10:37:37 compute-0 systemd-logind[797]: Session 24 logged out. Waiting for processes to exit.
Dec 15 10:37:37 compute-0 systemd-logind[797]: Session 26 logged out. Waiting for processes to exit.
Dec 15 10:37:37 compute-0 systemd-logind[797]: Session 21 logged out. Waiting for processes to exit.
Dec 15 10:37:37 compute-0 systemd-logind[797]: Session 28 logged out. Waiting for processes to exit.
Dec 15 10:37:37 compute-0 sudo[87648]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:37 compute-0 systemd-logind[797]: Removed session 25.
Dec 15 10:37:37 compute-0 systemd-logind[797]: Removed session 29.
Dec 15 10:37:37 compute-0 systemd-logind[797]: Removed session 30.
Dec 15 10:37:37 compute-0 systemd-logind[797]: Removed session 27.
Dec 15 10:37:37 compute-0 systemd-logind[797]: Removed session 32.
Dec 15 10:37:37 compute-0 systemd-logind[797]: Removed session 24.
Dec 15 10:37:37 compute-0 systemd-logind[797]: Removed session 21.
Dec 15 10:37:37 compute-0 sshd-session[75696]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 15 10:37:37 compute-0 sshd-session[75931]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 15 10:37:37 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ignoring --setuser ceph since I am not root
Dec 15 10:37:37 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ignoring --setgroup ceph since I am not root
Dec 15 10:37:37 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Dec 15 10:37:37 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Dec 15 10:37:37 compute-0 systemd-logind[797]: Removed session 26.
Dec 15 10:37:37 compute-0 systemd-logind[797]: Session 23 logged out. Waiting for processes to exit.
Dec 15 10:37:37 compute-0 systemd-logind[797]: Session 31 logged out. Waiting for processes to exit.
Dec 15 10:37:37 compute-0 systemd-logind[797]: Removed session 28.
Dec 15 10:37:37 compute-0 ceph-mgr[74651]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 15 10:37:37 compute-0 ceph-mgr[74651]: pidfile_write: ignore empty --pid-file
Dec 15 10:37:37 compute-0 systemd-logind[797]: Removed session 23.
Dec 15 10:37:37 compute-0 systemd-logind[797]: Removed session 31.
Dec 15 10:37:37 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'alerts'
Dec 15 10:37:37 compute-0 kind_varahamihira[87831]: --> passed data devices: 0 physical, 1 LVM
Dec 15 10:37:37 compute-0 kind_varahamihira[87831]: --> All data devices are unavailable
Dec 15 10:37:37 compute-0 systemd[1]: libpod-c98db62131172a862ee42e36046a5e9157d13b102f81040e3c9c96b728a61de5.scope: Deactivated successfully.
Dec 15 10:37:37 compute-0 podman[87815]: 2025-12-15 10:37:37.569948703 +0000 UTC m=+0.514927319 container died c98db62131172a862ee42e36046a5e9157d13b102f81040e3c9c96b728a61de5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_varahamihira, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:37:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b24d42aa1a2774e24397dd72f339d3e451d01b54eba5d4b7d28f62f91c0f78e-merged.mount: Deactivated successfully.
Dec 15 10:37:37 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:37.597+0000 7f1f61a32140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 15 10:37:37 compute-0 ceph-mgr[74651]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 15 10:37:37 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'balancer'
Dec 15 10:37:37 compute-0 podman[87815]: 2025-12-15 10:37:37.611526764 +0000 UTC m=+0.556505380 container remove c98db62131172a862ee42e36046a5e9157d13b102f81040e3c9c96b728a61de5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:37:37 compute-0 sudo[87714]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:37 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Dec 15 10:37:37 compute-0 systemd[1]: session-33.scope: Consumed 18.107s CPU time.
Dec 15 10:37:37 compute-0 systemd[1]: libpod-conmon-c98db62131172a862ee42e36046a5e9157d13b102f81040e3c9c96b728a61de5.scope: Deactivated successfully.
Dec 15 10:37:37 compute-0 systemd-logind[797]: Removed session 33.
Dec 15 10:37:37 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:37.692+0000 7f1f61a32140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 15 10:37:37 compute-0 ceph-mgr[74651]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 15 10:37:37 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'cephadm'
Dec 15 10:37:37 compute-0 sudo[87913]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyojohlcatiktyvvnypflkccxgbvkmpa ; /usr/bin/python3'
Dec 15 10:37:37 compute-0 sudo[87913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:37 compute-0 python3[87915]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:37 compute-0 ceph-mon[74356]: 2.11 scrub starts
Dec 15 10:37:37 compute-0 ceph-mon[74356]: 2.11 scrub ok
Dec 15 10:37:37 compute-0 ceph-mon[74356]: pgmap v101: 193 pgs: 62 peering, 131 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:37 compute-0 ceph-mon[74356]: 4.f scrub starts
Dec 15 10:37:37 compute-0 ceph-mon[74356]: 4.f scrub ok
Dec 15 10:37:37 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/2049069620' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 15 10:37:37 compute-0 ceph-mon[74356]: mgrmap e11: compute-0.difmqj(active, since 2m)
Dec 15 10:37:37 compute-0 ceph-mon[74356]: 7.16 deep-scrub starts
Dec 15 10:37:37 compute-0 ceph-mon[74356]: 7.16 deep-scrub ok
Dec 15 10:37:37 compute-0 podman[87916]: 2025-12-15 10:37:37.960544773 +0000 UTC m=+0.052552720 container create bef0d4d3febda973f11fbf0d264570b11e46249cc5543781ff7428bac49cdf72 (image=quay.io/ceph/ceph:v19, name=zealous_carver, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 15 10:37:38 compute-0 systemd[1]: Started libpod-conmon-bef0d4d3febda973f11fbf0d264570b11e46249cc5543781ff7428bac49cdf72.scope.
Dec 15 10:37:38 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c053d1af654882faba88579b03d83b43549ef80a392118a36914ea52246c3c1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c053d1af654882faba88579b03d83b43549ef80a392118a36914ea52246c3c1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c053d1af654882faba88579b03d83b43549ef80a392118a36914ea52246c3c1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:38 compute-0 podman[87916]: 2025-12-15 10:37:37.942824934 +0000 UTC m=+0.034832851 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:38 compute-0 podman[87916]: 2025-12-15 10:37:38.049455503 +0000 UTC m=+0.141463450 container init bef0d4d3febda973f11fbf0d264570b11e46249cc5543781ff7428bac49cdf72 (image=quay.io/ceph/ceph:v19, name=zealous_carver, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Dec 15 10:37:38 compute-0 podman[87916]: 2025-12-15 10:37:38.057282529 +0000 UTC m=+0.149290466 container start bef0d4d3febda973f11fbf0d264570b11e46249cc5543781ff7428bac49cdf72 (image=quay.io/ceph/ceph:v19, name=zealous_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:37:38 compute-0 podman[87916]: 2025-12-15 10:37:38.061061382 +0000 UTC m=+0.153069329 container attach bef0d4d3febda973f11fbf0d264570b11e46249cc5543781ff7428bac49cdf72 (image=quay.io/ceph/ceph:v19, name=zealous_carver, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:37:38 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Dec 15 10:37:38 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Dec 15 10:37:38 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'crash'
Dec 15 10:37:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "420316ee-1520-4a07-abed-ac56346e6610"} v 0)
Dec 15 10:37:38 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "420316ee-1520-4a07-abed-ac56346e6610"}]: dispatch
Dec 15 10:37:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec 15 10:37:38 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "420316ee-1520-4a07-abed-ac56346e6610"}]': finished
Dec 15 10:37:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e36 e36: 3 total, 2 up, 3 in
Dec 15 10:37:38 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 2 up, 3 in
Dec 15 10:37:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:38.528+0000 7f1f61a32140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 15 10:37:38 compute-0 ceph-mgr[74651]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 15 10:37:38 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'dashboard'
Dec 15 10:37:38 compute-0 ceph-mon[74356]: 4.3 scrub starts
Dec 15 10:37:38 compute-0 ceph-mon[74356]: from='client.? 192.168.122.102:0/48456279' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "420316ee-1520-4a07-abed-ac56346e6610"}]: dispatch
Dec 15 10:37:38 compute-0 ceph-mon[74356]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "420316ee-1520-4a07-abed-ac56346e6610"}]: dispatch
Dec 15 10:37:38 compute-0 ceph-mon[74356]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "420316ee-1520-4a07-abed-ac56346e6610"}]': finished
Dec 15 10:37:38 compute-0 ceph-mon[74356]: osdmap e36: 3 total, 2 up, 3 in
Dec 15 10:37:38 compute-0 ceph-mon[74356]: 7.15 scrub starts
Dec 15 10:37:38 compute-0 ceph-mon[74356]: 7.15 scrub ok
Dec 15 10:37:39 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'devicehealth'
Dec 15 10:37:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:39.221+0000 7f1f61a32140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 15 10:37:39 compute-0 ceph-mgr[74651]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 15 10:37:39 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'diskprediction_local'
Dec 15 10:37:39 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec 15 10:37:39 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec 15 10:37:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 15 10:37:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 15 10:37:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]:   from numpy import show_config as show_numpy_config
Dec 15 10:37:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:39.388+0000 7f1f61a32140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 15 10:37:39 compute-0 ceph-mgr[74651]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 15 10:37:39 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'influx'
Dec 15 10:37:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:39.459+0000 7f1f61a32140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 15 10:37:39 compute-0 ceph-mgr[74651]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 15 10:37:39 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'insights'
Dec 15 10:37:39 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'iostat'
Dec 15 10:37:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:39.618+0000 7f1f61a32140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 15 10:37:39 compute-0 ceph-mgr[74651]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 15 10:37:39 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'k8sevents'
Dec 15 10:37:39 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gxhwsu started
Dec 15 10:37:39 compute-0 ceph-mon[74356]: 4.3 scrub ok
Dec 15 10:37:39 compute-0 ceph-mon[74356]: from='client.? 192.168.122.102:0/1943811771' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 15 10:37:39 compute-0 ceph-mon[74356]: 6.1 scrub starts
Dec 15 10:37:39 compute-0 ceph-mon[74356]: 6.1 scrub ok
Dec 15 10:37:39 compute-0 ceph-mon[74356]: Standby manager daemon compute-2.gxhwsu started
Dec 15 10:37:39 compute-0 ceph-mon[74356]: 2.f scrub starts
Dec 15 10:37:39 compute-0 ceph-mon[74356]: 2.f scrub ok
Dec 15 10:37:39 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.difmqj(active, since 2m), standbys: compute-2.gxhwsu
Dec 15 10:37:40 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'localpool'
Dec 15 10:37:40 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'mds_autoscaler'
Dec 15 10:37:40 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.4 deep-scrub starts
Dec 15 10:37:40 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'mirroring'
Dec 15 10:37:40 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.4 deep-scrub ok
Dec 15 10:37:40 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'nfs'
Dec 15 10:37:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:40.709+0000 7f1f61a32140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 15 10:37:40 compute-0 ceph-mgr[74651]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 15 10:37:40 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'orchestrator'
Dec 15 10:37:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:40.949+0000 7f1f61a32140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 15 10:37:40 compute-0 ceph-mon[74356]: mgrmap e12: compute-0.difmqj(active, since 2m), standbys: compute-2.gxhwsu
Dec 15 10:37:40 compute-0 ceph-mon[74356]: 4.4 deep-scrub starts
Dec 15 10:37:40 compute-0 ceph-mon[74356]: 4.4 deep-scrub ok
Dec 15 10:37:40 compute-0 ceph-mon[74356]: 7.17 scrub starts
Dec 15 10:37:40 compute-0 ceph-mon[74356]: 7.17 scrub ok
Dec 15 10:37:40 compute-0 ceph-mgr[74651]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 15 10:37:40 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'osd_perf_query'
Dec 15 10:37:41 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:41.029+0000 7f1f61a32140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 15 10:37:41 compute-0 ceph-mgr[74651]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 15 10:37:41 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'osd_support'
Dec 15 10:37:41 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:41.096+0000 7f1f61a32140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 15 10:37:41 compute-0 ceph-mgr[74651]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 15 10:37:41 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'pg_autoscaler'
Dec 15 10:37:41 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:41.176+0000 7f1f61a32140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 15 10:37:41 compute-0 ceph-mgr[74651]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 15 10:37:41 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'progress'
Dec 15 10:37:41 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:37:41 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:41.250+0000 7f1f61a32140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 15 10:37:41 compute-0 ceph-mgr[74651]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 15 10:37:41 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'prometheus'
Dec 15 10:37:41 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec 15 10:37:41 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec 15 10:37:41 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:41.601+0000 7f1f61a32140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 15 10:37:41 compute-0 ceph-mgr[74651]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 15 10:37:41 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'rbd_support'
Dec 15 10:37:41 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:41.705+0000 7f1f61a32140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 15 10:37:41 compute-0 ceph-mgr[74651]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 15 10:37:41 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'restful'
Dec 15 10:37:41 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'rgw'
Dec 15 10:37:41 compute-0 ceph-mon[74356]: 2.b scrub starts
Dec 15 10:37:41 compute-0 ceph-mon[74356]: 2.b scrub ok
Dec 15 10:37:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:42.144+0000 7f1f61a32140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 15 10:37:42 compute-0 ceph-mgr[74651]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 15 10:37:42 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'rook'
Dec 15 10:37:42 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec 15 10:37:42 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec 15 10:37:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:42.764+0000 7f1f61a32140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 15 10:37:42 compute-0 ceph-mgr[74651]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 15 10:37:42 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'selftest'
Dec 15 10:37:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:42.836+0000 7f1f61a32140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 15 10:37:42 compute-0 ceph-mgr[74651]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 15 10:37:42 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'snap_schedule'
Dec 15 10:37:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:42.920+0000 7f1f61a32140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 15 10:37:42 compute-0 ceph-mgr[74651]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 15 10:37:42 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'stats'
Dec 15 10:37:42 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'status'
Dec 15 10:37:43 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:43.064+0000 7f1f61a32140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 15 10:37:43 compute-0 ceph-mgr[74651]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 15 10:37:43 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'telegraf'
Dec 15 10:37:43 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:43.138+0000 7f1f61a32140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 15 10:37:43 compute-0 ceph-mgr[74651]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 15 10:37:43 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'telemetry'
Dec 15 10:37:43 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:43.306+0000 7f1f61a32140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 15 10:37:43 compute-0 ceph-mgr[74651]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 15 10:37:43 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'test_orchestrator'
Dec 15 10:37:43 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec 15 10:37:43 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec 15 10:37:43 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:43.531+0000 7f1f61a32140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 15 10:37:43 compute-0 ceph-mgr[74651]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 15 10:37:43 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'volumes'
Dec 15 10:37:43 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:43.818+0000 7f1f61a32140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 15 10:37:43 compute-0 ceph-mgr[74651]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 15 10:37:43 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'zabbix'
Dec 15 10:37:43 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:43.903+0000 7f1f61a32140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 15 10:37:43 compute-0 ceph-mgr[74651]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 15 10:37:43 compute-0 ceph-mgr[74651]: ms_deliver_dispatch: unhandled message 0x557a3326ad00 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Dec 15 10:37:44 compute-0 ceph-mon[74356]: 5.5 scrub starts
Dec 15 10:37:44 compute-0 ceph-mon[74356]: 5.5 scrub ok
Dec 15 10:37:44 compute-0 ceph-mon[74356]: 2.3 scrub starts
Dec 15 10:37:44 compute-0 ceph-mon[74356]: 2.3 scrub ok
Dec 15 10:37:44 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : Active manager daemon compute-0.difmqj restarted
Dec 15 10:37:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec 15 10:37:44 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.difmqj
Dec 15 10:37:44 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Dec 15 10:37:44 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Dec 15 10:37:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e37 e37: 3 total, 2 up, 3 in
Dec 15 10:37:44 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 2 up, 3 in
Dec 15 10:37:44 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.difmqj(active, starting, since 0.389505s), standbys: compute-2.gxhwsu
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: mgr handle_mgr_map Activating!
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: mgr handle_mgr_map I am now activating
Dec 15 10:37:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 15 10:37:44 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 15 10:37:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 15 10:37:44 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 15 10:37:44 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 15 10:37:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.difmqj", "id": "compute-0.difmqj"} v 0)
Dec 15 10:37:44 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.difmqj", "id": "compute-0.difmqj"}]: dispatch
Dec 15 10:37:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.gxhwsu", "id": "compute-2.gxhwsu"} v 0)
Dec 15 10:37:44 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gxhwsu", "id": "compute-2.gxhwsu"}]: dispatch
Dec 15 10:37:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 15 10:37:44 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:37:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 15 10:37:44 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:37:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 15 10:37:44 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 15 10:37:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 15 10:37:44 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 15 10:37:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e1 all = 1
Dec 15 10:37:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 15 10:37:44 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 15 10:37:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 15 10:37:44 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: mgr load_all_metadata Skipping incomplete metadata entry
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: balancer
Dec 15 10:37:44 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : Manager daemon compute-0.difmqj is now available
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [balancer INFO root] Starting
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [balancer INFO root] Optimize plan auto_2025-12-15_10:37:44
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: cephadm
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: crash
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: dashboard
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO access_control] Loading user roles DB version=2
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO sso] Loading SSO DB version=1
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: devicehealth
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [devicehealth INFO root] Starting
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: iostat
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: nfs
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: orchestrator
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: pg_autoscaler
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: progress
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [progress INFO root] Loading...
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f1ee2de1df0>, <progress.module.GhostEvent object at 0x7f1edcd940a0>, <progress.module.GhostEvent object at 0x7f1edcd940d0>, <progress.module.GhostEvent object at 0x7f1edcd94100>, <progress.module.GhostEvent object at 0x7f1edcd94130>, <progress.module.GhostEvent object at 0x7f1edcd94160>, <progress.module.GhostEvent object at 0x7f1edcd94190>, <progress.module.GhostEvent object at 0x7f1edcd941c0>, <progress.module.GhostEvent object at 0x7f1edcd941f0>, <progress.module.GhostEvent object at 0x7f1edcd94220>] historic events
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [progress INFO root] Loaded OSDMap, ready.
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [rbd_support INFO root] recovery thread starting
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [rbd_support INFO root] starting setup
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: rbd_support
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: restful
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [restful INFO root] server_addr: :: server_port: 8003
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: status
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [restful WARNING root] server not running: no certificate configured
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: telemetry
Dec 15 10:37:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/mirror_snapshot_schedule"} v 0)
Dec 15 10:37:44 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/mirror_snapshot_schedule"}]: dispatch
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [rbd_support INFO root] PerfHandler: starting
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: volumes
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_task_task: images, start_after=
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [rbd_support INFO root] TaskHandler: starting
Dec 15 10:37:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/trash_purge_schedule"} v 0)
Dec 15 10:37:44 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/trash_purge_schedule"}]: dispatch
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [rbd_support INFO root] setup complete
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec 15 10:37:44 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.tlqguq started
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec 15 10:37:44 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec 15 10:37:45 compute-0 sshd-session[88087]: Accepted publickey for ceph-admin from 192.168.122.100 port 49390 ssh2: RSA SHA256:3azC1xJ0J6DfmukNQYX52hxlEZBJA1o57dtmni4O/KI
Dec 15 10:37:45 compute-0 systemd-logind[797]: New session 34 of user ceph-admin.
Dec 15 10:37:45 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Dec 15 10:37:45 compute-0 sshd-session[88087]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 15 10:37:45 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.module] Engine started.
Dec 15 10:37:45 compute-0 ceph-mon[74356]: 6.6 scrub starts
Dec 15 10:37:45 compute-0 ceph-mon[74356]: 6.6 scrub ok
Dec 15 10:37:45 compute-0 ceph-mon[74356]: 3.2 scrub starts
Dec 15 10:37:45 compute-0 ceph-mon[74356]: 3.2 scrub ok
Dec 15 10:37:45 compute-0 ceph-mon[74356]: 7.5 scrub starts
Dec 15 10:37:45 compute-0 ceph-mon[74356]: 7.5 scrub ok
Dec 15 10:37:45 compute-0 ceph-mon[74356]: Active manager daemon compute-0.difmqj restarted
Dec 15 10:37:45 compute-0 ceph-mon[74356]: Activating manager daemon compute-0.difmqj
Dec 15 10:37:45 compute-0 ceph-mon[74356]: osdmap e37: 3 total, 2 up, 3 in
Dec 15 10:37:45 compute-0 ceph-mon[74356]: mgrmap e13: compute-0.difmqj(active, starting, since 0.389505s), standbys: compute-2.gxhwsu
Dec 15 10:37:45 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 15 10:37:45 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:37:45 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 15 10:37:45 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.difmqj", "id": "compute-0.difmqj"}]: dispatch
Dec 15 10:37:45 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gxhwsu", "id": "compute-2.gxhwsu"}]: dispatch
Dec 15 10:37:45 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:37:45 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:37:45 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:37:45 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 15 10:37:45 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 15 10:37:45 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 15 10:37:45 compute-0 ceph-mon[74356]: Manager daemon compute-0.difmqj is now available
Dec 15 10:37:45 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/mirror_snapshot_schedule"}]: dispatch
Dec 15 10:37:45 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/trash_purge_schedule"}]: dispatch
Dec 15 10:37:45 compute-0 ceph-mon[74356]: 2.2 scrub starts
Dec 15 10:37:45 compute-0 ceph-mon[74356]: 2.2 scrub ok
Dec 15 10:37:45 compute-0 ceph-mon[74356]: Standby manager daemon compute-1.tlqguq started
Dec 15 10:37:45 compute-0 sudo[88099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:37:45 compute-0 sudo[88099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:45 compute-0 sudo[88099]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:45 compute-0 sudo[88124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 15 10:37:45 compute-0 sudo[88124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:45 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec 15 10:37:45 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec 15 10:37:45 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.difmqj(active, since 1.41695s), standbys: compute-2.gxhwsu, compute-1.tlqguq
Dec 15 10:37:45 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.tlqguq", "id": "compute-1.tlqguq"} v 0)
Dec 15 10:37:45 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.tlqguq", "id": "compute-1.tlqguq"}]: dispatch
Dec 15 10:37:45 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14268 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:37:45 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Dec 15 10:37:45 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v3: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:45 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:45 compute-0 zealous_carver[87932]: Option GRAFANA_API_USERNAME updated
Dec 15 10:37:45 compute-0 systemd[1]: libpod-bef0d4d3febda973f11fbf0d264570b11e46249cc5543781ff7428bac49cdf72.scope: Deactivated successfully.
Dec 15 10:37:45 compute-0 podman[87916]: 2025-12-15 10:37:45.589846039 +0000 UTC m=+7.681853946 container died bef0d4d3febda973f11fbf0d264570b11e46249cc5543781ff7428bac49cdf72 (image=quay.io/ceph/ceph:v19, name=zealous_carver, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 15 10:37:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c053d1af654882faba88579b03d83b43549ef80a392118a36914ea52246c3c1-merged.mount: Deactivated successfully.
Dec 15 10:37:45 compute-0 podman[87916]: 2025-12-15 10:37:45.662237917 +0000 UTC m=+7.754245834 container remove bef0d4d3febda973f11fbf0d264570b11e46249cc5543781ff7428bac49cdf72 (image=quay.io/ceph/ceph:v19, name=zealous_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True)
Dec 15 10:37:45 compute-0 sudo[87913]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:45 compute-0 systemd[1]: libpod-conmon-bef0d4d3febda973f11fbf0d264570b11e46249cc5543781ff7428bac49cdf72.scope: Deactivated successfully.
Dec 15 10:37:45 compute-0 ceph-mgr[74651]: [cephadm INFO cherrypy.error] [15/Dec/2025:10:37:45] ENGINE Bus STARTING
Dec 15 10:37:45 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : [15/Dec/2025:10:37:45] ENGINE Bus STARTING
Dec 15 10:37:45 compute-0 podman[88231]: 2025-12-15 10:37:45.79405242 +0000 UTC m=+0.050134491 container exec 79ed8dc51b1852fd831764c2817cfa3745ae4937bc9015bdb965e376ab3cc58d (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 15 10:37:45 compute-0 sudo[88285]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmpdbnjrwrxpxpmifndhiuhfoobycpmn ; /usr/bin/python3'
Dec 15 10:37:45 compute-0 sudo[88285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:45 compute-0 ceph-mgr[74651]: [cephadm INFO cherrypy.error] [15/Dec/2025:10:37:45] ENGINE Serving on https://192.168.122.100:7150
Dec 15 10:37:45 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : [15/Dec/2025:10:37:45] ENGINE Serving on https://192.168.122.100:7150
Dec 15 10:37:45 compute-0 ceph-mgr[74651]: [cephadm INFO cherrypy.error] [15/Dec/2025:10:37:45] ENGINE Client ('192.168.122.100', 42832) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 15 10:37:45 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : [15/Dec/2025:10:37:45] ENGINE Client ('192.168.122.100', 42832) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 15 10:37:45 compute-0 podman[88231]: 2025-12-15 10:37:45.893610318 +0000 UTC m=+0.149692369 container exec_died 79ed8dc51b1852fd831764c2817cfa3745ae4937bc9015bdb965e376ab3cc58d (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 15 10:37:45 compute-0 ceph-mgr[74651]: [cephadm INFO cherrypy.error] [15/Dec/2025:10:37:45] ENGINE Serving on http://192.168.122.100:8765
Dec 15 10:37:45 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : [15/Dec/2025:10:37:45] ENGINE Serving on http://192.168.122.100:8765
Dec 15 10:37:45 compute-0 ceph-mgr[74651]: [cephadm INFO cherrypy.error] [15/Dec/2025:10:37:45] ENGINE Bus STARTED
Dec 15 10:37:45 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : [15/Dec/2025:10:37:45] ENGINE Bus STARTED
Dec 15 10:37:46 compute-0 python3[88287]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Dec 15 10:37:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:37:46 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:37:46 compute-0 podman[88331]: 2025-12-15 10:37:46.05839531 +0000 UTC m=+0.044781246 container create e03479dd69c5d020eccde5359f139dacbadbd77cebdb3b06ec394480a8bf9071 (image=quay.io/ceph/ceph:v19, name=focused_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 15 10:37:46 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:46 compute-0 systemd[1]: Started libpod-conmon-e03479dd69c5d020eccde5359f139dacbadbd77cebdb3b06ec394480a8bf9071.scope.
Dec 15 10:37:46 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6cb7ec75c2bd7a5c7604518db462f84d8fa313dca2db8a231d2018c3d2197e6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6cb7ec75c2bd7a5c7604518db462f84d8fa313dca2db8a231d2018c3d2197e6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6cb7ec75c2bd7a5c7604518db462f84d8fa313dca2db8a231d2018c3d2197e6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:46 compute-0 podman[88331]: 2025-12-15 10:37:46.034982704 +0000 UTC m=+0.021368690 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:46 compute-0 podman[88331]: 2025-12-15 10:37:46.145954054 +0000 UTC m=+0.132340000 container init e03479dd69c5d020eccde5359f139dacbadbd77cebdb3b06ec394480a8bf9071 (image=quay.io/ceph/ceph:v19, name=focused_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:37:46 compute-0 podman[88331]: 2025-12-15 10:37:46.152960634 +0000 UTC m=+0.139346570 container start e03479dd69c5d020eccde5359f139dacbadbd77cebdb3b06ec394480a8bf9071 (image=quay.io/ceph/ceph:v19, name=focused_hermann, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 15 10:37:46 compute-0 podman[88331]: 2025-12-15 10:37:46.155467866 +0000 UTC m=+0.141853872 container attach e03479dd69c5d020eccde5359f139dacbadbd77cebdb3b06ec394480a8bf9071 (image=quay.io/ceph/ceph:v19, name=focused_hermann, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 15 10:37:46 compute-0 ceph-mon[74356]: 4.6 scrub starts
Dec 15 10:37:46 compute-0 ceph-mon[74356]: 4.6 scrub ok
Dec 15 10:37:46 compute-0 ceph-mon[74356]: mgrmap e14: compute-0.difmqj(active, since 1.41695s), standbys: compute-2.gxhwsu, compute-1.tlqguq
Dec 15 10:37:46 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.tlqguq", "id": "compute-1.tlqguq"}]: dispatch
Dec 15 10:37:46 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:46 compute-0 ceph-mon[74356]: 7.7 scrub starts
Dec 15 10:37:46 compute-0 ceph-mon[74356]: 7.7 scrub ok
Dec 15 10:37:46 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:46 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:46 compute-0 sudo[88124]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:37:46 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:37:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:37:46 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:37:46 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:37:46 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:46 compute-0 sudo[88395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:37:46 compute-0 sudo[88395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:46 compute-0 sudo[88395]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:46 compute-0 sudo[88429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 15 10:37:46 compute-0 sudo[88429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:46 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.2 deep-scrub starts
Dec 15 10:37:46 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.2 deep-scrub ok
Dec 15 10:37:46 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14292 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:37:46 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v4: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Dec 15 10:37:46 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:46 compute-0 focused_hermann[88363]: Option GRAFANA_API_PASSWORD updated
Dec 15 10:37:46 compute-0 systemd[1]: libpod-e03479dd69c5d020eccde5359f139dacbadbd77cebdb3b06ec394480a8bf9071.scope: Deactivated successfully.
Dec 15 10:37:46 compute-0 podman[88331]: 2025-12-15 10:37:46.574268499 +0000 UTC m=+0.560654465 container died e03479dd69c5d020eccde5359f139dacbadbd77cebdb3b06ec394480a8bf9071 (image=quay.io/ceph/ceph:v19, name=focused_hermann, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 15 10:37:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6cb7ec75c2bd7a5c7604518db462f84d8fa313dca2db8a231d2018c3d2197e6-merged.mount: Deactivated successfully.
Dec 15 10:37:46 compute-0 podman[88331]: 2025-12-15 10:37:46.609886906 +0000 UTC m=+0.596272842 container remove e03479dd69c5d020eccde5359f139dacbadbd77cebdb3b06ec394480a8bf9071 (image=quay.io/ceph/ceph:v19, name=focused_hermann, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:37:46 compute-0 systemd[1]: libpod-conmon-e03479dd69c5d020eccde5359f139dacbadbd77cebdb3b06ec394480a8bf9071.scope: Deactivated successfully.
Dec 15 10:37:46 compute-0 sudo[88285]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:46 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.difmqj(active, since 2s), standbys: compute-2.gxhwsu, compute-1.tlqguq
Dec 15 10:37:46 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gxhwsu restarted
Dec 15 10:37:46 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gxhwsu started
Dec 15 10:37:46 compute-0 ceph-mgr[74651]: [devicehealth INFO root] Check health
Dec 15 10:37:46 compute-0 sudo[88521]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsbbntjdslyyfaytpsifnmpufthhtpco ; /usr/bin/python3'
Dec 15 10:37:46 compute-0 sudo[88521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:46 compute-0 sudo[88429]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:46 compute-0 sudo[88536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:37:46 compute-0 sudo[88536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:46 compute-0 sudo[88536]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:46 compute-0 python3[88523]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:47 compute-0 podman[88561]: 2025-12-15 10:37:47.045560781 +0000 UTC m=+0.046721310 container create ff32bbbabac9674f4d1c340619d2373d9da9cf3eb5d89bae0b64abe86dcf38d8 (image=quay.io/ceph/ceph:v19, name=elegant_pare, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:37:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:37:47 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:37:47 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec 15 10:37:47 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 15 10:37:47 compute-0 systemd[1]: Started libpod-conmon-ff32bbbabac9674f4d1c340619d2373d9da9cf3eb5d89bae0b64abe86dcf38d8.scope.
Dec 15 10:37:47 compute-0 podman[88561]: 2025-12-15 10:37:47.02567775 +0000 UTC m=+0.026838329 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:47 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:47 compute-0 sudo[88574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 15 10:37:47 compute-0 sudo[88574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5743fec756f383df3d2064844bcb226c435fac4aa41ce34f5b071cdd1d52ebc7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5743fec756f383df3d2064844bcb226c435fac4aa41ce34f5b071cdd1d52ebc7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5743fec756f383df3d2064844bcb226c435fac4aa41ce34f5b071cdd1d52ebc7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:47 compute-0 podman[88561]: 2025-12-15 10:37:47.140327711 +0000 UTC m=+0.141488270 container init ff32bbbabac9674f4d1c340619d2373d9da9cf3eb5d89bae0b64abe86dcf38d8 (image=quay.io/ceph/ceph:v19, name=elegant_pare, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:37:47 compute-0 podman[88561]: 2025-12-15 10:37:47.145050106 +0000 UTC m=+0.146210635 container start ff32bbbabac9674f4d1c340619d2373d9da9cf3eb5d89bae0b64abe86dcf38d8 (image=quay.io/ceph/ceph:v19, name=elegant_pare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:37:47 compute-0 podman[88561]: 2025-12-15 10:37:47.148078766 +0000 UTC m=+0.149239325 container attach ff32bbbabac9674f4d1c340619d2373d9da9cf3eb5d89bae0b64abe86dcf38d8 (image=quay.io/ceph/ceph:v19, name=elegant_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:37:47 compute-0 ceph-mon[74356]: 6.4 scrub starts
Dec 15 10:37:47 compute-0 ceph-mon[74356]: 6.4 scrub ok
Dec 15 10:37:47 compute-0 ceph-mon[74356]: [15/Dec/2025:10:37:45] ENGINE Bus STARTING
Dec 15 10:37:47 compute-0 ceph-mon[74356]: [15/Dec/2025:10:37:45] ENGINE Serving on https://192.168.122.100:7150
Dec 15 10:37:47 compute-0 ceph-mon[74356]: [15/Dec/2025:10:37:45] ENGINE Client ('192.168.122.100', 42832) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 15 10:37:47 compute-0 ceph-mon[74356]: [15/Dec/2025:10:37:45] ENGINE Serving on http://192.168.122.100:8765
Dec 15 10:37:47 compute-0 ceph-mon[74356]: [15/Dec/2025:10:37:45] ENGINE Bus STARTED
Dec 15 10:37:47 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:47 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:47 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:47 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:47 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:47 compute-0 ceph-mon[74356]: mgrmap e15: compute-0.difmqj(active, since 2s), standbys: compute-2.gxhwsu, compute-1.tlqguq
Dec 15 10:37:47 compute-0 ceph-mon[74356]: Standby manager daemon compute-2.gxhwsu restarted
Dec 15 10:37:47 compute-0 ceph-mon[74356]: Standby manager daemon compute-2.gxhwsu started
Dec 15 10:37:47 compute-0 ceph-mon[74356]: 2.5 scrub starts
Dec 15 10:37:47 compute-0 ceph-mon[74356]: 2.5 scrub ok
Dec 15 10:37:47 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:47 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:47 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 15 10:37:47 compute-0 sudo[88574]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:37:47 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:37:47 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Dec 15 10:37:47 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:47 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Dec 15 10:37:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 15 10:37:47 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 15 10:37:47 compute-0 ceph-mgr[74651]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Dec 15 10:37:47 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Dec 15 10:37:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 15 10:37:47 compute-0 ceph-mgr[74651]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Dec 15 10:37:47 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Dec 15 10:37:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:37:47 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14304 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:37:47 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Dec 15 10:37:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:37:47 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:47 compute-0 elegant_pare[88599]: Option ALERTMANAGER_API_HOST updated
Dec 15 10:37:47 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 15 10:37:47 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 15 10:37:47 compute-0 ceph-mgr[74651]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 127.9M
Dec 15 10:37:47 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 127.9M
Dec 15 10:37:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 15 10:37:47 compute-0 ceph-mgr[74651]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Dec 15 10:37:47 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Dec 15 10:37:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:37:47 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:37:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 15 10:37:47 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:37:47 compute-0 systemd[1]: libpod-ff32bbbabac9674f4d1c340619d2373d9da9cf3eb5d89bae0b64abe86dcf38d8.scope: Deactivated successfully.
Dec 15 10:37:47 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 15 10:37:47 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 15 10:37:47 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec 15 10:37:47 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec 15 10:37:47 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec 15 10:37:47 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec 15 10:37:47 compute-0 podman[88643]: 2025-12-15 10:37:47.549160799 +0000 UTC m=+0.027507501 container died ff32bbbabac9674f4d1c340619d2373d9da9cf3eb5d89bae0b64abe86dcf38d8 (image=quay.io/ceph/ceph:v19, name=elegant_pare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 15 10:37:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-5743fec756f383df3d2064844bcb226c435fac4aa41ce34f5b071cdd1d52ebc7-merged.mount: Deactivated successfully.
Dec 15 10:37:47 compute-0 sudo[88644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 15 10:37:47 compute-0 sudo[88644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:47 compute-0 sudo[88644]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:47 compute-0 podman[88643]: 2025-12-15 10:37:47.586290944 +0000 UTC m=+0.064637636 container remove ff32bbbabac9674f4d1c340619d2373d9da9cf3eb5d89bae0b64abe86dcf38d8 (image=quay.io/ceph/ceph:v19, name=elegant_pare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:37:47 compute-0 systemd[1]: libpod-conmon-ff32bbbabac9674f4d1c340619d2373d9da9cf3eb5d89bae0b64abe86dcf38d8.scope: Deactivated successfully.
Dec 15 10:37:47 compute-0 sudo[88521]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:47 compute-0 sudo[88682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph
Dec 15 10:37:47 compute-0 sudo[88682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:47 compute-0 sudo[88682]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:47 compute-0 sudo[88707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.conf.new
Dec 15 10:37:47 compute-0 sudo[88707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:47 compute-0 sudo[88707]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:47 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.difmqj(active, since 3s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:37:47 compute-0 sudo[88763]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzlkmuqwltfyelicrjjonswnsvtwtvwc ; /usr/bin/python3'
Dec 15 10:37:47 compute-0 sudo[88763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:47 compute-0 sudo[88749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:37:47 compute-0 sudo[88749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:47 compute-0 sudo[88749]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:47 compute-0 sudo[88783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.conf.new
Dec 15 10:37:47 compute-0 sudo[88783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:47 compute-0 sudo[88783]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:47 compute-0 python3[88780]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:47 compute-0 sudo[88831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.conf.new
Dec 15 10:37:47 compute-0 sudo[88831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:47 compute-0 sudo[88831]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:47 compute-0 podman[88838]: 2025-12-15 10:37:47.956761746 +0000 UTC m=+0.048902022 container create 917ed7681235183d1e118463dc9dc3cb6b66354689e30a6d91bd086e552db3bb (image=quay.io/ceph/ceph:v19, name=busy_curie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:37:47 compute-0 systemd[1]: Started libpod-conmon-917ed7681235183d1e118463dc9dc3cb6b66354689e30a6d91bd086e552db3bb.scope.
Dec 15 10:37:47 compute-0 sudo[88869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.conf.new
Dec 15 10:37:47 compute-0 sudo[88869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:47 compute-0 sudo[88869]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:48 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632cfb4d2576d96b7b2c1b057d22d573dff5ba5e119aa1b6b4837c93f3a577bd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632cfb4d2576d96b7b2c1b057d22d573dff5ba5e119aa1b6b4837c93f3a577bd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632cfb4d2576d96b7b2c1b057d22d573dff5ba5e119aa1b6b4837c93f3a577bd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:48 compute-0 podman[88838]: 2025-12-15 10:37:47.930568608 +0000 UTC m=+0.022708874 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:48 compute-0 podman[88838]: 2025-12-15 10:37:48.035926375 +0000 UTC m=+0.128066631 container init 917ed7681235183d1e118463dc9dc3cb6b66354689e30a6d91bd086e552db3bb (image=quay.io/ceph/ceph:v19, name=busy_curie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 15 10:37:48 compute-0 podman[88838]: 2025-12-15 10:37:48.042130299 +0000 UTC m=+0.134270535 container start 917ed7681235183d1e118463dc9dc3cb6b66354689e30a6d91bd086e552db3bb (image=quay.io/ceph/ceph:v19, name=busy_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 15 10:37:48 compute-0 podman[88838]: 2025-12-15 10:37:48.045886792 +0000 UTC m=+0.138027028 container attach 917ed7681235183d1e118463dc9dc3cb6b66354689e30a6d91bd086e552db3bb (image=quay.io/ceph/ceph:v19, name=busy_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 15 10:37:48 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:37:48 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:37:48 compute-0 sudo[88899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 15 10:37:48 compute-0 sudo[88899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:48 compute-0 sudo[88899]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:48 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:37:48 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:37:48 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:37:48 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:37:48 compute-0 sudo[88925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config
Dec 15 10:37:48 compute-0 sudo[88925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:48 compute-0 sudo[88925]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:48 compute-0 sudo[88950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config
Dec 15 10:37:48 compute-0 sudo[88950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:48 compute-0 sudo[88950]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:48 compute-0 sudo[88994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf.new
Dec 15 10:37:48 compute-0 sudo[88994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:48 compute-0 sudo[88994]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:48 compute-0 ceph-mon[74356]: 4.2 deep-scrub starts
Dec 15 10:37:48 compute-0 ceph-mon[74356]: 4.2 deep-scrub ok
Dec 15 10:37:48 compute-0 ceph-mon[74356]: from='client.14292 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:37:48 compute-0 ceph-mon[74356]: pgmap v4: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:48 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:48 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:48 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 15 10:37:48 compute-0 ceph-mon[74356]: Adjusting osd_memory_target on compute-0 to 127.9M
Dec 15 10:37:48 compute-0 ceph-mon[74356]: Unable to set osd_memory_target on compute-0 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Dec 15 10:37:48 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:48 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:48 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:48 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 15 10:37:48 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:37:48 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:37:48 compute-0 ceph-mon[74356]: mgrmap e16: compute-0.difmqj(active, since 3s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:37:48 compute-0 ceph-mon[74356]: 7.1 scrub starts
Dec 15 10:37:48 compute-0 ceph-mon[74356]: 7.1 scrub ok
Dec 15 10:37:48 compute-0 sudo[89019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:37:48 compute-0 sudo[89019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:48 compute-0 sudo[89019]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:48 compute-0 sudo[89044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf.new
Dec 15 10:37:48 compute-0 sudo[89044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:48 compute-0 sudo[89044]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:48 compute-0 sudo[89092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf.new
Dec 15 10:37:48 compute-0 sudo[89092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:48 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14310 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:37:48 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Dec 15 10:37:48 compute-0 sudo[89092]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:48 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Dec 15 10:37:48 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Dec 15 10:37:48 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:48 compute-0 busy_curie[88895]: Option PROMETHEUS_API_HOST updated
Dec 15 10:37:48 compute-0 systemd[1]: libpod-917ed7681235183d1e118463dc9dc3cb6b66354689e30a6d91bd086e552db3bb.scope: Deactivated successfully.
Dec 15 10:37:48 compute-0 podman[88838]: 2025-12-15 10:37:48.484253595 +0000 UTC m=+0.576393831 container died 917ed7681235183d1e118463dc9dc3cb6b66354689e30a6d91bd086e552db3bb (image=quay.io/ceph/ceph:v19, name=busy_curie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:37:48 compute-0 sudo[89118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf.new
Dec 15 10:37:48 compute-0 sudo[89118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:48 compute-0 sudo[89118]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-632cfb4d2576d96b7b2c1b057d22d573dff5ba5e119aa1b6b4837c93f3a577bd-merged.mount: Deactivated successfully.
Dec 15 10:37:48 compute-0 podman[88838]: 2025-12-15 10:37:48.526018212 +0000 UTC m=+0.618158448 container remove 917ed7681235183d1e118463dc9dc3cb6b66354689e30a6d91bd086e552db3bb (image=quay.io/ceph/ceph:v19, name=busy_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 15 10:37:48 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v5: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:48 compute-0 systemd[1]: libpod-conmon-917ed7681235183d1e118463dc9dc3cb6b66354689e30a6d91bd086e552db3bb.scope: Deactivated successfully.
Dec 15 10:37:48 compute-0 sudo[88763]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:48 compute-0 sudo[89151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf.new /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:37:48 compute-0 sudo[89151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:48 compute-0 sudo[89151]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:48 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:37:48 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:37:48 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:37:48 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:37:48 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:37:48 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:37:48 compute-0 sudo[89179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 15 10:37:48 compute-0 sudo[89179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:48 compute-0 sudo[89179]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:48 compute-0 sudo[89204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph
Dec 15 10:37:48 compute-0 sudo[89204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:48 compute-0 sudo[89204]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:48 compute-0 sudo[89264]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqumaccsqyiqhgcgzngzqhinnurtiico ; /usr/bin/python3'
Dec 15 10:37:48 compute-0 sudo[89264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:48 compute-0 sudo[89241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.client.admin.keyring.new
Dec 15 10:37:48 compute-0 sudo[89241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:48 compute-0 sudo[89241]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:48 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.difmqj(active, since 4s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:37:48 compute-0 sudo[89280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:37:48 compute-0 sudo[89280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:48 compute-0 sudo[89280]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:48 compute-0 sudo[89305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.client.admin.keyring.new
Dec 15 10:37:48 compute-0 sudo[89305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:48 compute-0 sudo[89305]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:48 compute-0 python3[89277]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:48 compute-0 podman[89333]: 2025-12-15 10:37:48.922492225 +0000 UTC m=+0.047323909 container create e1a7e17ec5d1724409e8018faf5a8d84ce8a361f0e1f171cd475b09acb15688b (image=quay.io/ceph/ceph:v19, name=silly_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 15 10:37:48 compute-0 systemd[1]: Started libpod-conmon-e1a7e17ec5d1724409e8018faf5a8d84ce8a361f0e1f171cd475b09acb15688b.scope.
Dec 15 10:37:48 compute-0 sudo[89366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.client.admin.keyring.new
Dec 15 10:37:48 compute-0 sudo[89366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:48 compute-0 sudo[89366]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:48 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1574c25f0e1e79c04944a6a344af50b3cc643c694e0a752017ed330af5e98a3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1574c25f0e1e79c04944a6a344af50b3cc643c694e0a752017ed330af5e98a3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1574c25f0e1e79c04944a6a344af50b3cc643c694e0a752017ed330af5e98a3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:48 compute-0 podman[89333]: 2025-12-15 10:37:48.99111908 +0000 UTC m=+0.115950784 container init e1a7e17ec5d1724409e8018faf5a8d84ce8a361f0e1f171cd475b09acb15688b (image=quay.io/ceph/ceph:v19, name=silly_colden, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:37:48 compute-0 podman[89333]: 2025-12-15 10:37:48.90062847 +0000 UTC m=+0.025460194 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:49 compute-0 podman[89333]: 2025-12-15 10:37:49.00422548 +0000 UTC m=+0.129057164 container start e1a7e17ec5d1724409e8018faf5a8d84ce8a361f0e1f171cd475b09acb15688b (image=quay.io/ceph/ceph:v19, name=silly_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:37:49 compute-0 podman[89333]: 2025-12-15 10:37:49.00792122 +0000 UTC m=+0.132752924 container attach e1a7e17ec5d1724409e8018faf5a8d84ce8a361f0e1f171cd475b09acb15688b (image=quay.io/ceph/ceph:v19, name=silly_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Dec 15 10:37:49 compute-0 sudo[89396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.client.admin.keyring.new
Dec 15 10:37:49 compute-0 sudo[89396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:49 compute-0 sudo[89396]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:49 compute-0 sudo[89422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 15 10:37:49 compute-0 sudo[89422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:49 compute-0 sudo[89422]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:49 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:37:49 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:37:49 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:37:49 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:37:49 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:37:49 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:37:49 compute-0 sudo[89447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config
Dec 15 10:37:49 compute-0 sudo[89447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:49 compute-0 sudo[89447]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:49 compute-0 sudo[89491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config
Dec 15 10:37:49 compute-0 sudo[89491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:49 compute-0 sudo[89491]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:49 compute-0 sudo[89516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring.new
Dec 15 10:37:49 compute-0 sudo[89516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:49 compute-0 sudo[89516]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:49 compute-0 ceph-mon[74356]: 6.0 scrub starts
Dec 15 10:37:49 compute-0 ceph-mon[74356]: 6.0 scrub ok
Dec 15 10:37:49 compute-0 ceph-mon[74356]: from='client.14304 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:37:49 compute-0 ceph-mon[74356]: Adjusting osd_memory_target on compute-1 to 127.9M
Dec 15 10:37:49 compute-0 ceph-mon[74356]: Unable to set osd_memory_target on compute-1 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Dec 15 10:37:49 compute-0 ceph-mon[74356]: Updating compute-0:/etc/ceph/ceph.conf
Dec 15 10:37:49 compute-0 ceph-mon[74356]: Updating compute-1:/etc/ceph/ceph.conf
Dec 15 10:37:49 compute-0 ceph-mon[74356]: Updating compute-2:/etc/ceph/ceph.conf
Dec 15 10:37:49 compute-0 ceph-mon[74356]: Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:37:49 compute-0 ceph-mon[74356]: Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:37:49 compute-0 ceph-mon[74356]: Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:37:49 compute-0 ceph-mon[74356]: from='client.14310 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:37:49 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:49 compute-0 ceph-mon[74356]: mgrmap e17: compute-0.difmqj(active, since 4s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:37:49 compute-0 ceph-mon[74356]: 7.d scrub starts
Dec 15 10:37:49 compute-0 ceph-mon[74356]: 7.d scrub ok
Dec 15 10:37:49 compute-0 sudo[89541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:37:49 compute-0 sudo[89541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:49 compute-0 sudo[89541]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:49 compute-0 sudo[89566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring.new
Dec 15 10:37:49 compute-0 sudo[89566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:49 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14316 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:37:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Dec 15 10:37:49 compute-0 sudo[89566]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:49 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:49 compute-0 silly_colden[89392]: Option GRAFANA_API_URL updated
Dec 15 10:37:49 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Dec 15 10:37:49 compute-0 systemd[1]: libpod-e1a7e17ec5d1724409e8018faf5a8d84ce8a361f0e1f171cd475b09acb15688b.scope: Deactivated successfully.
Dec 15 10:37:49 compute-0 podman[89333]: 2025-12-15 10:37:49.433783865 +0000 UTC m=+0.558615539 container died e1a7e17ec5d1724409e8018faf5a8d84ce8a361f0e1f171cd475b09acb15688b (image=quay.io/ceph/ceph:v19, name=silly_colden, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:37:49 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Dec 15 10:37:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1574c25f0e1e79c04944a6a344af50b3cc643c694e0a752017ed330af5e98a3-merged.mount: Deactivated successfully.
Dec 15 10:37:49 compute-0 podman[89333]: 2025-12-15 10:37:49.473331379 +0000 UTC m=+0.598163053 container remove e1a7e17ec5d1724409e8018faf5a8d84ce8a361f0e1f171cd475b09acb15688b (image=quay.io/ceph/ceph:v19, name=silly_colden, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Dec 15 10:37:49 compute-0 systemd[1]: libpod-conmon-e1a7e17ec5d1724409e8018faf5a8d84ce8a361f0e1f171cd475b09acb15688b.scope: Deactivated successfully.
Dec 15 10:37:49 compute-0 sudo[89616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring.new
Dec 15 10:37:49 compute-0 sudo[89616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:49 compute-0 sudo[89264]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:49 compute-0 sudo[89616]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:49 compute-0 sudo[89653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring.new
Dec 15 10:37:49 compute-0 sudo[89653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:49 compute-0 sudo[89653]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:49 compute-0 sudo[89678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring.new /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:37:49 compute-0 sudo[89678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:37:49 compute-0 sudo[89724]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itbadvzptlxpmomwaigoicsszlhaxnbx ; /usr/bin/python3'
Dec 15 10:37:49 compute-0 sudo[89678]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:49 compute-0 sudo[89724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:49 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:37:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:37:49 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:37:49 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:49 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:49 compute-0 python3[89728]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:37:49 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:37:49 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 15 10:37:49 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:49 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev c2869bd7-c319-4f0f-adb0-39afe39f9c0d (Updating node-exporter deployment (+3 -> 3))
Dec 15 10:37:49 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Dec 15 10:37:49 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Dec 15 10:37:49 compute-0 podman[89729]: 2025-12-15 10:37:49.856066532 +0000 UTC m=+0.055544719 container create 5b8fed1dee98d492ae2a4a2a28af8d1d9b71edb17140ca0515eb21bfa2a72a2a (image=quay.io/ceph/ceph:v19, name=gracious_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 15 10:37:49 compute-0 sudo[89739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:37:49 compute-0 sudo[89739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:49 compute-0 sudo[89739]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:49 compute-0 systemd[1]: Started libpod-conmon-5b8fed1dee98d492ae2a4a2a28af8d1d9b71edb17140ca0515eb21bfa2a72a2a.scope.
Dec 15 10:37:49 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba2fc705a6947664aed5ec9179ed96313227b66381429c3b9031f8021e63dda0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba2fc705a6947664aed5ec9179ed96313227b66381429c3b9031f8021e63dda0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba2fc705a6947664aed5ec9179ed96313227b66381429c3b9031f8021e63dda0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:49 compute-0 podman[89729]: 2025-12-15 10:37:49.827180207 +0000 UTC m=+0.026658414 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:49 compute-0 sudo[89769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:37:49 compute-0 sudo[89769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:37:49 compute-0 podman[89729]: 2025-12-15 10:37:49.930474667 +0000 UTC m=+0.129952874 container init 5b8fed1dee98d492ae2a4a2a28af8d1d9b71edb17140ca0515eb21bfa2a72a2a (image=quay.io/ceph/ceph:v19, name=gracious_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 15 10:37:49 compute-0 podman[89729]: 2025-12-15 10:37:49.937088413 +0000 UTC m=+0.136566600 container start 5b8fed1dee98d492ae2a4a2a28af8d1d9b71edb17140ca0515eb21bfa2a72a2a (image=quay.io/ceph/ceph:v19, name=gracious_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:37:49 compute-0 podman[89729]: 2025-12-15 10:37:49.941438066 +0000 UTC m=+0.140916263 container attach 5b8fed1dee98d492ae2a4a2a28af8d1d9b71edb17140ca0515eb21bfa2a72a2a (image=quay.io/ceph/ceph:v19, name=gracious_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:37:50 compute-0 ceph-mon[74356]: 5.3 scrub starts
Dec 15 10:37:50 compute-0 ceph-mon[74356]: 5.3 scrub ok
Dec 15 10:37:50 compute-0 ceph-mon[74356]: pgmap v5: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:50 compute-0 ceph-mon[74356]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:37:50 compute-0 ceph-mon[74356]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:37:50 compute-0 ceph-mon[74356]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:37:50 compute-0 ceph-mon[74356]: Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:37:50 compute-0 ceph-mon[74356]: Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:37:50 compute-0 ceph-mon[74356]: Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:37:50 compute-0 ceph-mon[74356]: from='client.14316 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:37:50 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:50 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:50 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:50 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:50 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:50 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:50 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:50 compute-0 ceph-mon[74356]: from='mgr.24110 192.168.122.100:0/781320137' entity='mgr.compute-0.difmqj' 
Dec 15 10:37:50 compute-0 ceph-mon[74356]: 7.c scrub starts
Dec 15 10:37:50 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Dec 15 10:37:50 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1858742975' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 15 10:37:50 compute-0 systemd[1]: Reloading.
Dec 15 10:37:50 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec 15 10:37:50 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec 15 10:37:50 compute-0 systemd-rc-local-generator[89883]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:37:50 compute-0 systemd-sysv-generator[89889]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:37:50 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v6: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:37:50 compute-0 systemd[1]: Reloading.
Dec 15 10:37:50 compute-0 systemd-rc-local-generator[89926]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:37:50 compute-0 systemd-sysv-generator[89930]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:37:50 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for 77365f67-614e-5a8d-b658-640395550c79...
Dec 15 10:37:51 compute-0 bash[89979]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Dec 15 10:37:51 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:37:51 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1858742975' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 15 10:37:51 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.difmqj(active, since 7s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:37:51 compute-0 ceph-mgr[74651]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 15 10:37:51 compute-0 ceph-mgr[74651]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 15 10:37:51 compute-0 ceph-mgr[74651]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 15 10:37:51 compute-0 ceph-mgr[74651]: mgr respawn  1: '-n'
Dec 15 10:37:51 compute-0 ceph-mgr[74651]: mgr respawn  2: 'mgr.compute-0.difmqj'
Dec 15 10:37:51 compute-0 ceph-mgr[74651]: mgr respawn  3: '-f'
Dec 15 10:37:51 compute-0 ceph-mgr[74651]: mgr respawn  4: '--setuser'
Dec 15 10:37:51 compute-0 ceph-mgr[74651]: mgr respawn  5: 'ceph'
Dec 15 10:37:51 compute-0 ceph-mgr[74651]: mgr respawn  6: '--setgroup'
Dec 15 10:37:51 compute-0 ceph-mgr[74651]: mgr respawn  7: 'ceph'
Dec 15 10:37:51 compute-0 ceph-mgr[74651]: mgr respawn  8: '--default-log-to-file=false'
Dec 15 10:37:51 compute-0 ceph-mgr[74651]: mgr respawn  9: '--default-log-to-journald=true'
Dec 15 10:37:51 compute-0 ceph-mgr[74651]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 15 10:37:51 compute-0 ceph-mgr[74651]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 15 10:37:51 compute-0 ceph-mgr[74651]: mgr respawn  exe_path /proc/self/exe
Dec 15 10:37:51 compute-0 systemd[1]: libpod-5b8fed1dee98d492ae2a4a2a28af8d1d9b71edb17140ca0515eb21bfa2a72a2a.scope: Deactivated successfully.
Dec 15 10:37:51 compute-0 podman[89729]: 2025-12-15 10:37:51.38257651 +0000 UTC m=+1.582054717 container died 5b8fed1dee98d492ae2a4a2a28af8d1d9b71edb17140ca0515eb21bfa2a72a2a (image=quay.io/ceph/ceph:v19, name=gracious_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 15 10:37:51 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Dec 15 10:37:51 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Dec 15 10:37:51 compute-0 sshd-session[88098]: Connection closed by 192.168.122.100 port 49390
Dec 15 10:37:51 compute-0 sshd-session[88087]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 15 10:37:51 compute-0 systemd-logind[797]: Session 34 logged out. Waiting for processes to exit.
Dec 15 10:37:51 compute-0 ceph-mon[74356]: 5.0 scrub starts
Dec 15 10:37:51 compute-0 ceph-mon[74356]: 5.0 scrub ok
Dec 15 10:37:51 compute-0 ceph-mon[74356]: Deploying daemon node-exporter.compute-0 on compute-0
Dec 15 10:37:51 compute-0 ceph-mon[74356]: 7.c scrub ok
Dec 15 10:37:51 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/1858742975' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 15 10:37:51 compute-0 ceph-mon[74356]: 2.1c deep-scrub starts
Dec 15 10:37:51 compute-0 ceph-mon[74356]: 2.1c deep-scrub ok
Dec 15 10:37:51 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ignoring --setuser ceph since I am not root
Dec 15 10:37:51 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ignoring --setgroup ceph since I am not root
Dec 15 10:37:51 compute-0 ceph-mgr[74651]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 15 10:37:51 compute-0 ceph-mgr[74651]: pidfile_write: ignore empty --pid-file
Dec 15 10:37:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba2fc705a6947664aed5ec9179ed96313227b66381429c3b9031f8021e63dda0-merged.mount: Deactivated successfully.
Dec 15 10:37:51 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'alerts'
Dec 15 10:37:51 compute-0 podman[89729]: 2025-12-15 10:37:51.516706769 +0000 UTC m=+1.716184956 container remove 5b8fed1dee98d492ae2a4a2a28af8d1d9b71edb17140ca0515eb21bfa2a72a2a (image=quay.io/ceph/ceph:v19, name=gracious_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 15 10:37:51 compute-0 systemd[1]: libpod-conmon-5b8fed1dee98d492ae2a4a2a28af8d1d9b71edb17140ca0515eb21bfa2a72a2a.scope: Deactivated successfully.
Dec 15 10:37:51 compute-0 sudo[89724]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:51 compute-0 bash[89979]: Getting image source signatures
Dec 15 10:37:51 compute-0 bash[89979]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Dec 15 10:37:51 compute-0 bash[89979]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Dec 15 10:37:51 compute-0 bash[89979]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Dec 15 10:37:51 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:51.631+0000 7f8e082bb140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 15 10:37:51 compute-0 ceph-mgr[74651]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 15 10:37:51 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'balancer'
Dec 15 10:37:51 compute-0 sudo[90053]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfdrhcndetexemfjrikqueovkzzselqn ; /usr/bin/python3'
Dec 15 10:37:51 compute-0 sudo[90053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:51 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:51.737+0000 7f8e082bb140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 15 10:37:51 compute-0 ceph-mgr[74651]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 15 10:37:51 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'cephadm'
Dec 15 10:37:51 compute-0 python3[90055]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:52 compute-0 bash[89979]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Dec 15 10:37:52 compute-0 bash[89979]: Writing manifest to image destination
Dec 15 10:37:52 compute-0 podman[90094]: 2025-12-15 10:37:52.224725846 +0000 UTC m=+0.368883181 container create 79d97650c693ab103bcdc849a7e72457ea78f9c5bff6c059a97e4908335f3cba (image=quay.io/ceph/ceph:v19, name=interesting_bardeen, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:37:52 compute-0 podman[90094]: 2025-12-15 10:37:52.205933991 +0000 UTC m=+0.350091346 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:52 compute-0 podman[89979]: 2025-12-15 10:37:52.257166998 +0000 UTC m=+1.167022068 container create af7cec967afbf19ae8aa93bce2194c8b8c2dd9c500f2d7f44ece310d3a1d4cc1 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:37:52 compute-0 systemd[1]: Started libpod-conmon-79d97650c693ab103bcdc849a7e72457ea78f9c5bff6c059a97e4908335f3cba.scope.
Dec 15 10:37:52 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d57fa9d9a894903ec32a7ea283c5a1e6ecadc100bd5a34536df4442b391b3d/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e817954e2c7d567c8464d32ccad13e878dd1ddd3179000c92c011be818d6fbba/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e817954e2c7d567c8464d32ccad13e878dd1ddd3179000c92c011be818d6fbba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e817954e2c7d567c8464d32ccad13e878dd1ddd3179000c92c011be818d6fbba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:52 compute-0 podman[89979]: 2025-12-15 10:37:52.306820532 +0000 UTC m=+1.216675632 container init af7cec967afbf19ae8aa93bce2194c8b8c2dd9c500f2d7f44ece310d3a1d4cc1 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:37:52 compute-0 podman[89979]: 2025-12-15 10:37:52.313023066 +0000 UTC m=+1.222878136 container start af7cec967afbf19ae8aa93bce2194c8b8c2dd9c500f2d7f44ece310d3a1d4cc1 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:37:52 compute-0 podman[90094]: 2025-12-15 10:37:52.315159495 +0000 UTC m=+0.459316860 container init 79d97650c693ab103bcdc849a7e72457ea78f9c5bff6c059a97e4908335f3cba (image=quay.io/ceph/ceph:v19, name=interesting_bardeen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:37:52 compute-0 podman[89979]: 2025-12-15 10:37:52.238809697 +0000 UTC m=+1.148664797 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.318Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.318Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Dec 15 10:37:52 compute-0 bash[89979]: af7cec967afbf19ae8aa93bce2194c8b8c2dd9c500f2d7f44ece310d3a1d4cc1
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.319Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.319Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.319Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.320Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=arp
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=bcache
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=bonding
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=cpu
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=dmi
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=edac
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=entropy
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=filefd
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=hwmon
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=netclass
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=netdev
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=netstat
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=nfs
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=nvme
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=os
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=pressure
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=rapl
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=selinux
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=softnet
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=stat
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=textfile
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=thermal_zone
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=time
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=uname
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=xfs
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.321Z caller=node_exporter.go:117 level=info collector=zfs
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.322Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[90138]: ts=2025-12-15T10:37:52.322Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Dec 15 10:37:52 compute-0 podman[90094]: 2025-12-15 10:37:52.322750073 +0000 UTC m=+0.466907418 container start 79d97650c693ab103bcdc849a7e72457ea78f9c5bff6c059a97e4908335f3cba (image=quay.io/ceph/ceph:v19, name=interesting_bardeen, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:37:52 compute-0 podman[90094]: 2025-12-15 10:37:52.325788233 +0000 UTC m=+0.469945588 container attach 79d97650c693ab103bcdc849a7e72457ea78f9c5bff6c059a97e4908335f3cba (image=quay.io/ceph/ceph:v19, name=interesting_bardeen, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 15 10:37:52 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:37:52 compute-0 sudo[89769]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:52 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec 15 10:37:52 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Dec 15 10:37:52 compute-0 systemd[1]: session-34.scope: Consumed 4.985s CPU time.
Dec 15 10:37:52 compute-0 systemd-logind[797]: Removed session 34.
Dec 15 10:37:52 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec 15 10:37:52 compute-0 ceph-mon[74356]: 3.7 scrub starts
Dec 15 10:37:52 compute-0 ceph-mon[74356]: 3.7 scrub ok
Dec 15 10:37:52 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/1858742975' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 15 10:37:52 compute-0 ceph-mon[74356]: mgrmap e18: compute-0.difmqj(active, since 7s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:37:52 compute-0 ceph-mon[74356]: 4.0 scrub starts
Dec 15 10:37:52 compute-0 ceph-mon[74356]: 4.0 scrub ok
Dec 15 10:37:52 compute-0 ceph-mon[74356]: 7.19 deep-scrub starts
Dec 15 10:37:52 compute-0 ceph-mon[74356]: 7.19 deep-scrub ok
Dec 15 10:37:52 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'crash'
Dec 15 10:37:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:52.635+0000 7f8e082bb140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 15 10:37:52 compute-0 ceph-mgr[74651]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 15 10:37:52 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'dashboard'
Dec 15 10:37:52 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.tlqguq restarted
Dec 15 10:37:52 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.tlqguq started
Dec 15 10:37:52 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Dec 15 10:37:52 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2025472241' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 15 10:37:53 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'devicehealth'
Dec 15 10:37:53 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:53.288+0000 7f8e082bb140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 15 10:37:53 compute-0 ceph-mgr[74651]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 15 10:37:53 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'diskprediction_local'
Dec 15 10:37:53 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.0 deep-scrub starts
Dec 15 10:37:53 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.0 deep-scrub ok
Dec 15 10:37:53 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 15 10:37:53 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 15 10:37:53 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]:   from numpy import show_config as show_numpy_config
Dec 15 10:37:53 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:53.469+0000 7f8e082bb140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 15 10:37:53 compute-0 ceph-mgr[74651]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 15 10:37:53 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'influx'
Dec 15 10:37:53 compute-0 ceph-mon[74356]: 4.7 scrub starts
Dec 15 10:37:53 compute-0 ceph-mon[74356]: 4.7 scrub ok
Dec 15 10:37:53 compute-0 ceph-mon[74356]: Standby manager daemon compute-1.tlqguq restarted
Dec 15 10:37:53 compute-0 ceph-mon[74356]: Standby manager daemon compute-1.tlqguq started
Dec 15 10:37:53 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/2025472241' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 15 10:37:53 compute-0 ceph-mon[74356]: 7.1a scrub starts
Dec 15 10:37:53 compute-0 ceph-mon[74356]: 7.1a scrub ok
Dec 15 10:37:53 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2025472241' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 15 10:37:53 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:53.539+0000 7f8e082bb140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 15 10:37:53 compute-0 ceph-mgr[74651]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 15 10:37:53 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'insights'
Dec 15 10:37:53 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.difmqj(active, since 9s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:37:53 compute-0 systemd[1]: libpod-79d97650c693ab103bcdc849a7e72457ea78f9c5bff6c059a97e4908335f3cba.scope: Deactivated successfully.
Dec 15 10:37:53 compute-0 podman[90094]: 2025-12-15 10:37:53.557857737 +0000 UTC m=+1.702015102 container died 79d97650c693ab103bcdc849a7e72457ea78f9c5bff6c059a97e4908335f3cba (image=quay.io/ceph/ceph:v19, name=interesting_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 15 10:37:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-e817954e2c7d567c8464d32ccad13e878dd1ddd3179000c92c011be818d6fbba-merged.mount: Deactivated successfully.
Dec 15 10:37:53 compute-0 podman[90094]: 2025-12-15 10:37:53.611282635 +0000 UTC m=+1.755439990 container remove 79d97650c693ab103bcdc849a7e72457ea78f9c5bff6c059a97e4908335f3cba (image=quay.io/ceph/ceph:v19, name=interesting_bardeen, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 15 10:37:53 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'iostat'
Dec 15 10:37:53 compute-0 systemd[1]: libpod-conmon-79d97650c693ab103bcdc849a7e72457ea78f9c5bff6c059a97e4908335f3cba.scope: Deactivated successfully.
Dec 15 10:37:53 compute-0 sudo[90053]: pam_unix(sudo:session): session closed for user root
Dec 15 10:37:53 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:53.684+0000 7f8e082bb140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 15 10:37:53 compute-0 ceph-mgr[74651]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 15 10:37:53 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'k8sevents'
Dec 15 10:37:54 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'localpool'
Dec 15 10:37:54 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'mds_autoscaler'
Dec 15 10:37:54 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Dec 15 10:37:54 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Dec 15 10:37:54 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'mirroring'
Dec 15 10:37:54 compute-0 python3[90258]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 15 10:37:54 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'nfs'
Dec 15 10:37:54 compute-0 ceph-mon[74356]: 3.0 deep-scrub starts
Dec 15 10:37:54 compute-0 ceph-mon[74356]: 3.0 deep-scrub ok
Dec 15 10:37:54 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/2025472241' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 15 10:37:54 compute-0 ceph-mon[74356]: mgrmap e19: compute-0.difmqj(active, since 9s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:37:54 compute-0 ceph-mon[74356]: 7.0 scrub starts
Dec 15 10:37:54 compute-0 ceph-mon[74356]: 7.0 scrub ok
Dec 15 10:37:54 compute-0 ceph-mon[74356]: 5.6 scrub starts
Dec 15 10:37:54 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:54.694+0000 7f8e082bb140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 15 10:37:54 compute-0 ceph-mgr[74651]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 15 10:37:54 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'orchestrator'
Dec 15 10:37:54 compute-0 python3[90329]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765795074.1623726-37396-31759266476399/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 15 10:37:54 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:54.906+0000 7f8e082bb140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 15 10:37:54 compute-0 ceph-mgr[74651]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 15 10:37:54 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'osd_perf_query'
Dec 15 10:37:54 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:54.986+0000 7f8e082bb140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 15 10:37:54 compute-0 ceph-mgr[74651]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 15 10:37:54 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'osd_support'
Dec 15 10:37:55 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:55.060+0000 7f8e082bb140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 15 10:37:55 compute-0 ceph-mgr[74651]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 15 10:37:55 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'pg_autoscaler'
Dec 15 10:37:55 compute-0 sudo[90377]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yatjxhuaqmvgoxuvlwggpdvkvdrnztai ; /usr/bin/python3'
Dec 15 10:37:55 compute-0 sudo[90377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:37:55 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:55.144+0000 7f8e082bb140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 15 10:37:55 compute-0 ceph-mgr[74651]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 15 10:37:55 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'progress'
Dec 15 10:37:55 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:55.215+0000 7f8e082bb140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 15 10:37:55 compute-0 ceph-mgr[74651]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 15 10:37:55 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'prometheus'
Dec 15 10:37:55 compute-0 python3[90379]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:37:55 compute-0 podman[90380]: 2025-12-15 10:37:55.317237166 +0000 UTC m=+0.041759119 container create 84ec5704e97005b66478b932cce2c051a44fd8df68899f9df2813f1ff95cf117 (image=quay.io/ceph/ceph:v19, name=fervent_swirles, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 15 10:37:55 compute-0 systemd[1]: Started libpod-conmon-84ec5704e97005b66478b932cce2c051a44fd8df68899f9df2813f1ff95cf117.scope.
Dec 15 10:37:55 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.c deep-scrub starts
Dec 15 10:37:55 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.c deep-scrub ok
Dec 15 10:37:55 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c313c3d778eb65d6c025eb306927db6b630a272a06b4a6ed6aa06fc52c50ab88/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c313c3d778eb65d6c025eb306927db6b630a272a06b4a6ed6aa06fc52c50ab88/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c313c3d778eb65d6c025eb306927db6b630a272a06b4a6ed6aa06fc52c50ab88/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:37:55 compute-0 podman[90380]: 2025-12-15 10:37:55.298847203 +0000 UTC m=+0.023369186 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:37:55 compute-0 podman[90380]: 2025-12-15 10:37:55.399724744 +0000 UTC m=+0.124246727 container init 84ec5704e97005b66478b932cce2c051a44fd8df68899f9df2813f1ff95cf117 (image=quay.io/ceph/ceph:v19, name=fervent_swirles, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 15 10:37:55 compute-0 podman[90380]: 2025-12-15 10:37:55.4075473 +0000 UTC m=+0.132069253 container start 84ec5704e97005b66478b932cce2c051a44fd8df68899f9df2813f1ff95cf117 (image=quay.io/ceph/ceph:v19, name=fervent_swirles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 15 10:37:55 compute-0 podman[90380]: 2025-12-15 10:37:55.411492559 +0000 UTC m=+0.136014532 container attach 84ec5704e97005b66478b932cce2c051a44fd8df68899f9df2813f1ff95cf117 (image=quay.io/ceph/ceph:v19, name=fervent_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 15 10:37:55 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:55.561+0000 7f8e082bb140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 15 10:37:55 compute-0 ceph-mgr[74651]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 15 10:37:55 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'rbd_support'
Dec 15 10:37:55 compute-0 ceph-mon[74356]: 5.6 scrub ok
Dec 15 10:37:55 compute-0 ceph-mon[74356]: 4.1f scrub starts
Dec 15 10:37:55 compute-0 ceph-mon[74356]: 4.1f scrub ok
Dec 15 10:37:55 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:55.660+0000 7f8e082bb140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 15 10:37:55 compute-0 ceph-mgr[74651]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 15 10:37:55 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'restful'
Dec 15 10:37:55 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'rgw'
Dec 15 10:37:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:56.119+0000 7f8e082bb140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 15 10:37:56 compute-0 ceph-mgr[74651]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 15 10:37:56 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'rook'
Dec 15 10:37:56 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:37:56 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec 15 10:37:56 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec 15 10:37:56 compute-0 ceph-mon[74356]: 5.c deep-scrub starts
Dec 15 10:37:56 compute-0 ceph-mon[74356]: 5.c deep-scrub ok
Dec 15 10:37:56 compute-0 ceph-mon[74356]: 6.1c scrub starts
Dec 15 10:37:56 compute-0 ceph-mon[74356]: 6.1c scrub ok
Dec 15 10:37:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:56.698+0000 7f8e082bb140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 15 10:37:56 compute-0 ceph-mgr[74651]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 15 10:37:56 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'selftest'
Dec 15 10:37:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:56.775+0000 7f8e082bb140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 15 10:37:56 compute-0 ceph-mgr[74651]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 15 10:37:56 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'snap_schedule'
Dec 15 10:37:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:56.865+0000 7f8e082bb140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 15 10:37:56 compute-0 ceph-mgr[74651]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 15 10:37:56 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'stats'
Dec 15 10:37:56 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'status'
Dec 15 10:37:57 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:57.029+0000 7f8e082bb140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'telegraf'
Dec 15 10:37:57 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:57.102+0000 7f8e082bb140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'telemetry'
Dec 15 10:37:57 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:57.265+0000 7f8e082bb140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'test_orchestrator'
Dec 15 10:37:57 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec 15 10:37:57 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec 15 10:37:57 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:57.491+0000 7f8e082bb140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'volumes'
Dec 15 10:37:57 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:57.766+0000 7f8e082bb140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'zabbix'
Dec 15 10:37:57 compute-0 ceph-mon[74356]: 6.f scrub starts
Dec 15 10:37:57 compute-0 ceph-mon[74356]: 6.f scrub ok
Dec 15 10:37:57 compute-0 ceph-mon[74356]: 5.1f deep-scrub starts
Dec 15 10:37:57 compute-0 ceph-mon[74356]: 5.1f deep-scrub ok
Dec 15 10:37:57 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:57.838+0000 7f8e082bb140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 15 10:37:57 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : Active manager daemon compute-0.difmqj restarted
Dec 15 10:37:57 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec 15 10:37:57 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.difmqj
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: ms_deliver_dispatch: unhandled message 0x563dcc267860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr respawn  1: '-n'
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr respawn  2: 'mgr.compute-0.difmqj'
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr respawn  3: '-f'
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr respawn  4: '--setuser'
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr respawn  5: 'ceph'
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr respawn  6: '--setgroup'
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr respawn  7: 'ceph'
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr respawn  8: '--default-log-to-file=false'
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr respawn  9: '--default-log-to-journald=true'
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr respawn  exe_path /proc/self/exe
Dec 15 10:37:57 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e38 e38: 3 total, 2 up, 3 in
Dec 15 10:37:57 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 2 up, 3 in
Dec 15 10:37:57 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.difmqj(active, starting, since 0.0309843s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:37:57 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ignoring --setuser ceph since I am not root
Dec 15 10:37:57 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ignoring --setgroup ceph since I am not root
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: pidfile_write: ignore empty --pid-file
Dec 15 10:37:57 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'alerts'
Dec 15 10:37:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:58.082+0000 7f5e0e0ac140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 15 10:37:58 compute-0 ceph-mgr[74651]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 15 10:37:58 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'balancer'
Dec 15 10:37:58 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gxhwsu restarted
Dec 15 10:37:58 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gxhwsu started
Dec 15 10:37:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:58.170+0000 7f5e0e0ac140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 15 10:37:58 compute-0 ceph-mgr[74651]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 15 10:37:58 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'cephadm'
Dec 15 10:37:58 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.d deep-scrub starts
Dec 15 10:37:58 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.d deep-scrub ok
Dec 15 10:37:58 compute-0 ceph-mon[74356]: 3.b scrub starts
Dec 15 10:37:58 compute-0 ceph-mon[74356]: 3.b scrub ok
Dec 15 10:37:58 compute-0 ceph-mon[74356]: Active manager daemon compute-0.difmqj restarted
Dec 15 10:37:58 compute-0 ceph-mon[74356]: Activating manager daemon compute-0.difmqj
Dec 15 10:37:58 compute-0 ceph-mon[74356]: osdmap e38: 3 total, 2 up, 3 in
Dec 15 10:37:58 compute-0 ceph-mon[74356]: mgrmap e20: compute-0.difmqj(active, starting, since 0.0309843s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:37:58 compute-0 ceph-mon[74356]: Standby manager daemon compute-2.gxhwsu restarted
Dec 15 10:37:58 compute-0 ceph-mon[74356]: Standby manager daemon compute-2.gxhwsu started
Dec 15 10:37:58 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.difmqj(active, starting, since 1.04939s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:37:58 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'crash'
Dec 15 10:37:59 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:59.048+0000 7f5e0e0ac140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 15 10:37:59 compute-0 ceph-mgr[74651]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 15 10:37:59 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'dashboard'
Dec 15 10:37:59 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec 15 10:37:59 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec 15 10:37:59 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'devicehealth'
Dec 15 10:37:59 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:59.729+0000 7f5e0e0ac140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 15 10:37:59 compute-0 ceph-mgr[74651]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 15 10:37:59 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'diskprediction_local'
Dec 15 10:37:59 compute-0 ceph-mon[74356]: 3.16 deep-scrub starts
Dec 15 10:37:59 compute-0 ceph-mon[74356]: 3.16 deep-scrub ok
Dec 15 10:37:59 compute-0 ceph-mon[74356]: 5.d deep-scrub starts
Dec 15 10:37:59 compute-0 ceph-mon[74356]: 5.d deep-scrub ok
Dec 15 10:37:59 compute-0 ceph-mon[74356]: mgrmap e21: compute-0.difmqj(active, starting, since 1.04939s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:37:59 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 15 10:37:59 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 15 10:37:59 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]:   from numpy import show_config as show_numpy_config
Dec 15 10:37:59 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:59.911+0000 7f5e0e0ac140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 15 10:37:59 compute-0 ceph-mgr[74651]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 15 10:37:59 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'influx'
Dec 15 10:37:59 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:37:59.991+0000 7f5e0e0ac140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 15 10:37:59 compute-0 ceph-mgr[74651]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 15 10:37:59 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'insights'
Dec 15 10:38:00 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'iostat'
Dec 15 10:38:00 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.tlqguq restarted
Dec 15 10:38:00 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.tlqguq started
Dec 15 10:38:00 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:38:00.133+0000 7f5e0e0ac140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 15 10:38:00 compute-0 ceph-mgr[74651]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 15 10:38:00 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'k8sevents'
Dec 15 10:38:00 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec 15 10:38:00 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec 15 10:38:00 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'localpool'
Dec 15 10:38:00 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'mds_autoscaler'
Dec 15 10:38:00 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'mirroring'
Dec 15 10:38:00 compute-0 ceph-mon[74356]: 5.10 scrub starts
Dec 15 10:38:00 compute-0 ceph-mon[74356]: 5.10 scrub ok
Dec 15 10:38:00 compute-0 ceph-mon[74356]: 4.b scrub starts
Dec 15 10:38:00 compute-0 ceph-mon[74356]: 4.b scrub ok
Dec 15 10:38:00 compute-0 ceph-mon[74356]: Standby manager daemon compute-1.tlqguq restarted
Dec 15 10:38:00 compute-0 ceph-mon[74356]: Standby manager daemon compute-1.tlqguq started
Dec 15 10:38:00 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'nfs'
Dec 15 10:38:00 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.difmqj(active, starting, since 3s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:38:01 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:38:01.190+0000 7f5e0e0ac140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 15 10:38:01 compute-0 ceph-mgr[74651]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 15 10:38:01 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'orchestrator'
Dec 15 10:38:01 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:38:01 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec 15 10:38:01 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec 15 10:38:01 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:38:01.453+0000 7f5e0e0ac140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 15 10:38:01 compute-0 ceph-mgr[74651]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 15 10:38:01 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'osd_perf_query'
Dec 15 10:38:01 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:38:01.533+0000 7f5e0e0ac140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 15 10:38:01 compute-0 ceph-mgr[74651]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 15 10:38:01 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'osd_support'
Dec 15 10:38:01 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:38:01.606+0000 7f5e0e0ac140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 15 10:38:01 compute-0 ceph-mgr[74651]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 15 10:38:01 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'pg_autoscaler'
Dec 15 10:38:01 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:38:01.696+0000 7f5e0e0ac140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 15 10:38:01 compute-0 ceph-mgr[74651]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 15 10:38:01 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'progress'
Dec 15 10:38:01 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:38:01.802+0000 7f5e0e0ac140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 15 10:38:01 compute-0 ceph-mgr[74651]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 15 10:38:01 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'prometheus'
Dec 15 10:38:01 compute-0 ceph-mon[74356]: 5.11 deep-scrub starts
Dec 15 10:38:01 compute-0 ceph-mon[74356]: 5.11 deep-scrub ok
Dec 15 10:38:01 compute-0 ceph-mon[74356]: 5.a scrub starts
Dec 15 10:38:01 compute-0 ceph-mon[74356]: 5.a scrub ok
Dec 15 10:38:01 compute-0 ceph-mon[74356]: 6.12 scrub starts
Dec 15 10:38:01 compute-0 ceph-mon[74356]: 6.12 scrub ok
Dec 15 10:38:01 compute-0 ceph-mon[74356]: mgrmap e22: compute-0.difmqj(active, starting, since 3s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:38:02 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:38:02.165+0000 7f5e0e0ac140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 15 10:38:02 compute-0 ceph-mgr[74651]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 15 10:38:02 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'rbd_support'
Dec 15 10:38:02 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:38:02.268+0000 7f5e0e0ac140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 15 10:38:02 compute-0 ceph-mgr[74651]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 15 10:38:02 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'restful'
Dec 15 10:38:02 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Dec 15 10:38:02 compute-0 systemd[75682]: Activating special unit Exit the Session...
Dec 15 10:38:02 compute-0 systemd[75682]: Stopped target Main User Target.
Dec 15 10:38:02 compute-0 systemd[75682]: Stopped target Basic System.
Dec 15 10:38:02 compute-0 systemd[75682]: Stopped target Paths.
Dec 15 10:38:02 compute-0 systemd[75682]: Stopped target Sockets.
Dec 15 10:38:02 compute-0 systemd[75682]: Stopped target Timers.
Dec 15 10:38:02 compute-0 systemd[75682]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 15 10:38:02 compute-0 systemd[75682]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 15 10:38:02 compute-0 systemd[75682]: Closed D-Bus User Message Bus Socket.
Dec 15 10:38:02 compute-0 systemd[75682]: Stopped Create User's Volatile Files and Directories.
Dec 15 10:38:02 compute-0 systemd[75682]: Removed slice User Application Slice.
Dec 15 10:38:02 compute-0 systemd[75682]: Reached target Shutdown.
Dec 15 10:38:02 compute-0 systemd[75682]: Finished Exit the Session.
Dec 15 10:38:02 compute-0 systemd[75682]: Reached target Exit the Session.
Dec 15 10:38:02 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Dec 15 10:38:02 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Dec 15 10:38:02 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec 15 10:38:02 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec 15 10:38:02 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec 15 10:38:02 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec 15 10:38:02 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Dec 15 10:38:02 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.b deep-scrub starts
Dec 15 10:38:02 compute-0 systemd[1]: user-42477.slice: Consumed 24.481s CPU time.
Dec 15 10:38:02 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.b deep-scrub ok
Dec 15 10:38:02 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'rgw'
Dec 15 10:38:02 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:38:02.721+0000 7f5e0e0ac140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 15 10:38:02 compute-0 ceph-mgr[74651]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 15 10:38:02 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'rook'
Dec 15 10:38:02 compute-0 sshd-session[90450]: Invalid user  from 115.190.87.147 port 58588
Dec 15 10:38:02 compute-0 ceph-mon[74356]: 6.9 scrub starts
Dec 15 10:38:02 compute-0 ceph-mon[74356]: 6.9 scrub ok
Dec 15 10:38:02 compute-0 ceph-mon[74356]: 3.14 scrub starts
Dec 15 10:38:02 compute-0 ceph-mon[74356]: 3.14 scrub ok
Dec 15 10:38:03 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:38:03.296+0000 7f5e0e0ac140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 15 10:38:03 compute-0 ceph-mgr[74651]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 15 10:38:03 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'selftest'
Dec 15 10:38:03 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:38:03.375+0000 7f5e0e0ac140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 15 10:38:03 compute-0 ceph-mgr[74651]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 15 10:38:03 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'snap_schedule'
Dec 15 10:38:03 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Dec 15 10:38:03 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:38:03.461+0000 7f5e0e0ac140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 15 10:38:03 compute-0 ceph-mgr[74651]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 15 10:38:03 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'stats'
Dec 15 10:38:03 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Dec 15 10:38:03 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'status'
Dec 15 10:38:03 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:38:03.619+0000 7f5e0e0ac140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 15 10:38:03 compute-0 ceph-mgr[74651]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 15 10:38:03 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'telegraf'
Dec 15 10:38:03 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:38:03.693+0000 7f5e0e0ac140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 15 10:38:03 compute-0 ceph-mgr[74651]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 15 10:38:03 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'telemetry'
Dec 15 10:38:03 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:38:03.853+0000 7f5e0e0ac140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 15 10:38:03 compute-0 ceph-mgr[74651]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 15 10:38:03 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'test_orchestrator'
Dec 15 10:38:03 compute-0 ceph-mon[74356]: 5.b deep-scrub starts
Dec 15 10:38:03 compute-0 ceph-mon[74356]: 5.b deep-scrub ok
Dec 15 10:38:03 compute-0 ceph-mon[74356]: 4.13 scrub starts
Dec 15 10:38:03 compute-0 ceph-mon[74356]: 4.13 scrub ok
Dec 15 10:38:03 compute-0 ceph-mon[74356]: 3.15 scrub starts
Dec 15 10:38:04 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:38:04.112+0000 7f5e0e0ac140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 15 10:38:04 compute-0 ceph-mgr[74651]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 15 10:38:04 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'volumes'
Dec 15 10:38:04 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:38:04.369+0000 7f5e0e0ac140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 15 10:38:04 compute-0 ceph-mgr[74651]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 15 10:38:04 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'zabbix'
Dec 15 10:38:04 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:38:04.440+0000 7f5e0e0ac140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 15 10:38:04 compute-0 ceph-mgr[74651]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 15 10:38:04 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : Active manager daemon compute-0.difmqj restarted
Dec 15 10:38:04 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec 15 10:38:04 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.difmqj
Dec 15 10:38:04 compute-0 ceph-mgr[74651]: ms_deliver_dispatch: unhandled message 0x564901f23860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 15 10:38:04 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec 15 10:38:04 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec 15 10:38:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e39 e39: 3 total, 2 up, 3 in
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: mgr handle_mgr_map Activating!
Dec 15 10:38:05 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 2 up, 3 in
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: mgr handle_mgr_map I am now activating
Dec 15 10:38:05 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.difmqj(active, starting, since 0.574049s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:38:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 15 10:38:05 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 15 10:38:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 15 10:38:05 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:38:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 15 10:38:05 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 15 10:38:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.difmqj", "id": "compute-0.difmqj"} v 0)
Dec 15 10:38:05 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.difmqj", "id": "compute-0.difmqj"}]: dispatch
Dec 15 10:38:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.tlqguq", "id": "compute-1.tlqguq"} v 0)
Dec 15 10:38:05 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.tlqguq", "id": "compute-1.tlqguq"}]: dispatch
Dec 15 10:38:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.gxhwsu", "id": "compute-2.gxhwsu"} v 0)
Dec 15 10:38:05 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gxhwsu", "id": "compute-2.gxhwsu"}]: dispatch
Dec 15 10:38:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 15 10:38:05 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:38:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 15 10:38:05 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:38:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 15 10:38:05 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 15 10:38:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 15 10:38:05 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 15 10:38:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e1 all = 1
Dec 15 10:38:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 15 10:38:05 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 15 10:38:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 15 10:38:05 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 15 10:38:05 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gxhwsu restarted
Dec 15 10:38:05 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gxhwsu started
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: mgr load_all_metadata Skipping incomplete metadata entry
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: balancer
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [balancer INFO root] Starting
Dec 15 10:38:05 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : Manager daemon compute-0.difmqj is now available
Dec 15 10:38:05 compute-0 ceph-mon[74356]: 5.8 scrub starts
Dec 15 10:38:05 compute-0 ceph-mon[74356]: 5.8 scrub ok
Dec 15 10:38:05 compute-0 ceph-mon[74356]: 3.15 scrub ok
Dec 15 10:38:05 compute-0 ceph-mon[74356]: Active manager daemon compute-0.difmqj restarted
Dec 15 10:38:05 compute-0 ceph-mon[74356]: Activating manager daemon compute-0.difmqj
Dec 15 10:38:05 compute-0 ceph-mon[74356]: 4.15 scrub starts
Dec 15 10:38:05 compute-0 ceph-mon[74356]: 4.15 scrub ok
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [balancer INFO root] Optimize plan auto_2025-12-15_10:38:05
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: cephadm
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: crash
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: dashboard
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO access_control] Loading user roles DB version=2
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO sso] Loading SSO DB version=1
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: devicehealth
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [devicehealth INFO root] Starting
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: iostat
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: nfs
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: orchestrator
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: pg_autoscaler
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: progress
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [progress INFO root] Loading...
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f5d89c09130>, <progress.module.GhostEvent object at 0x7f5d89c09100>, <progress.module.GhostEvent object at 0x7f5d89c090d0>, <progress.module.GhostEvent object at 0x7f5d89c09190>, <progress.module.GhostEvent object at 0x7f5d89c091c0>, <progress.module.GhostEvent object at 0x7f5d89c091f0>, <progress.module.GhostEvent object at 0x7f5d89c09220>, <progress.module.GhostEvent object at 0x7f5d89c09250>, <progress.module.GhostEvent object at 0x7f5d89c09280>, <progress.module.GhostEvent object at 0x7f5d89c092b0>] historic events
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] recovery thread starting
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] starting setup
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [progress INFO root] Loaded OSDMap, ready.
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: rbd_support
Dec 15 10:38:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/mirror_snapshot_schedule"} v 0)
Dec 15 10:38:05 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/mirror_snapshot_schedule"}]: dispatch
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: restful
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [restful INFO root] server_addr: :: server_port: 8003
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [restful WARNING root] server not running: no certificate configured
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: status
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: telemetry
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: volumes
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec 15 10:38:05 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Dec 15 10:38:05 compute-0 sshd-session[90565]: Accepted publickey for ceph-admin from 192.168.122.100 port 53996 ssh2: RSA SHA256:3azC1xJ0J6DfmukNQYX52hxlEZBJA1o57dtmni4O/KI
Dec 15 10:38:05 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Dec 15 10:38:05 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 15 10:38:05 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 15 10:38:05 compute-0 systemd-logind[797]: New session 35 of user ceph-admin.
Dec 15 10:38:05 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 15 10:38:05 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 15 10:38:05 compute-0 systemd[90580]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] PerfHandler: starting
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.module] Engine started.
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_task_task: images, start_after=
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] TaskHandler: starting
Dec 15 10:38:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/trash_purge_schedule"} v 0)
Dec 15 10:38:05 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/trash_purge_schedule"}]: dispatch
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 15 10:38:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] setup complete
Dec 15 10:38:05 compute-0 systemd[90580]: Queued start job for default target Main User Target.
Dec 15 10:38:05 compute-0 systemd[90580]: Created slice User Application Slice.
Dec 15 10:38:05 compute-0 systemd[90580]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 15 10:38:05 compute-0 systemd[90580]: Started Daily Cleanup of User's Temporary Directories.
Dec 15 10:38:05 compute-0 systemd[90580]: Reached target Paths.
Dec 15 10:38:05 compute-0 systemd[90580]: Reached target Timers.
Dec 15 10:38:05 compute-0 systemd[90580]: Starting D-Bus User Message Bus Socket...
Dec 15 10:38:05 compute-0 systemd[90580]: Starting Create User's Volatile Files and Directories...
Dec 15 10:38:05 compute-0 systemd[90580]: Listening on D-Bus User Message Bus Socket.
Dec 15 10:38:05 compute-0 systemd[90580]: Reached target Sockets.
Dec 15 10:38:05 compute-0 systemd[90580]: Finished Create User's Volatile Files and Directories.
Dec 15 10:38:05 compute-0 systemd[90580]: Reached target Basic System.
Dec 15 10:38:05 compute-0 systemd[90580]: Reached target Main User Target.
Dec 15 10:38:05 compute-0 systemd[90580]: Startup finished in 130ms.
Dec 15 10:38:05 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 15 10:38:05 compute-0 systemd[1]: Started Session 35 of User ceph-admin.
Dec 15 10:38:05 compute-0 sshd-session[90565]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 15 10:38:05 compute-0 sudo[90601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:38:05 compute-0 sudo[90601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:05 compute-0 sudo[90601]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:05 compute-0 sudo[90626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 15 10:38:05 compute-0 sudo[90626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:06 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14343 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:38:06 compute-0 ceph-mgr[74651]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 15 10:38:06 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.difmqj(active, since 1.60206s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:38:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Dec 15 10:38:06 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec 15 10:38:06 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v3: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:38:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Dec 15 10:38:06 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec 15 10:38:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Dec 15 10:38:06 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec 15 10:38:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec 15 10:38:06 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0[74352]: 2025-12-15T10:38:06.055+0000 7f34fc006640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 15 10:38:06 compute-0 ceph-mon[74356]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 15 10:38:06 compute-0 ceph-mon[74356]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 15 10:38:06 compute-0 ceph-mon[74356]: 6.b scrub starts
Dec 15 10:38:06 compute-0 ceph-mon[74356]: 6.b scrub ok
Dec 15 10:38:06 compute-0 ceph-mon[74356]: osdmap e39: 3 total, 2 up, 3 in
Dec 15 10:38:06 compute-0 ceph-mon[74356]: mgrmap e23: compute-0.difmqj(active, starting, since 0.574049s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:38:06 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 15 10:38:06 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:38:06 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 15 10:38:06 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.difmqj", "id": "compute-0.difmqj"}]: dispatch
Dec 15 10:38:06 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.tlqguq", "id": "compute-1.tlqguq"}]: dispatch
Dec 15 10:38:06 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gxhwsu", "id": "compute-2.gxhwsu"}]: dispatch
Dec 15 10:38:06 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:38:06 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:38:06 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:06 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 15 10:38:06 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 15 10:38:06 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 15 10:38:06 compute-0 ceph-mon[74356]: Standby manager daemon compute-2.gxhwsu restarted
Dec 15 10:38:06 compute-0 ceph-mon[74356]: Standby manager daemon compute-2.gxhwsu started
Dec 15 10:38:06 compute-0 ceph-mon[74356]: Manager daemon compute-0.difmqj is now available
Dec 15 10:38:06 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/mirror_snapshot_schedule"}]: dispatch
Dec 15 10:38:06 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/trash_purge_schedule"}]: dispatch
Dec 15 10:38:06 compute-0 ceph-mon[74356]: 6.17 scrub starts
Dec 15 10:38:06 compute-0 ceph-mon[74356]: 6.17 scrub ok
Dec 15 10:38:06 compute-0 ceph-mon[74356]: mgrmap e24: compute-0.difmqj(active, since 1.60206s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:38:06 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 15 10:38:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e2 new map
Dec 15 10:38:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2025-12-15T10:38:06.056757+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-15T10:38:06.056696+0000
                                           modified        2025-12-15T10:38:06.056696+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Dec 15 10:38:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e40 e40: 3 total, 2 up, 3 in
Dec 15 10:38:06 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 2 up, 3 in
Dec 15 10:38:06 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec 15 10:38:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 15 10:38:06 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:06 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 15 10:38:06 compute-0 ceph-mgr[74651]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 15 10:38:06 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 15 10:38:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 15 10:38:06 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:06 compute-0 ceph-mgr[74651]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 15 10:38:06 compute-0 systemd[1]: libpod-84ec5704e97005b66478b932cce2c051a44fd8df68899f9df2813f1ff95cf117.scope: Deactivated successfully.
Dec 15 10:38:06 compute-0 podman[90380]: 2025-12-15 10:38:06.139117764 +0000 UTC m=+10.863639707 container died 84ec5704e97005b66478b932cce2c051a44fd8df68899f9df2813f1ff95cf117 (image=quay.io/ceph/ceph:v19, name=fervent_swirles, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:38:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-c313c3d778eb65d6c025eb306927db6b630a272a06b4a6ed6aa06fc52c50ab88-merged.mount: Deactivated successfully.
Dec 15 10:38:06 compute-0 podman[90380]: 2025-12-15 10:38:06.186183764 +0000 UTC m=+10.910705747 container remove 84ec5704e97005b66478b932cce2c051a44fd8df68899f9df2813f1ff95cf117 (image=quay.io/ceph/ceph:v19, name=fervent_swirles, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:38:06 compute-0 systemd[1]: libpod-conmon-84ec5704e97005b66478b932cce2c051a44fd8df68899f9df2813f1ff95cf117.scope: Deactivated successfully.
Dec 15 10:38:06 compute-0 sudo[90377]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:38:06 compute-0 sudo[90748]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxukhpubgeijzwxypgcivdqkaubfdlgx ; /usr/bin/python3'
Dec 15 10:38:06 compute-0 sudo[90748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:38:06 compute-0 podman[90757]: 2025-12-15 10:38:06.460794729 +0000 UTC m=+0.060766679 container exec 79ed8dc51b1852fd831764c2817cfa3745ae4937bc9015bdb965e376ab3cc58d (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 15 10:38:06 compute-0 ceph-mgr[74651]: [cephadm INFO cherrypy.error] [15/Dec/2025:10:38:06] ENGINE Bus STARTING
Dec 15 10:38:06 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : [15/Dec/2025:10:38:06] ENGINE Bus STARTING
Dec 15 10:38:06 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Dec 15 10:38:06 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Dec 15 10:38:06 compute-0 python3[90756]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:38:06 compute-0 ceph-mgr[74651]: [cephadm INFO cherrypy.error] [15/Dec/2025:10:38:06] ENGINE Serving on http://192.168.122.100:8765
Dec 15 10:38:06 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : [15/Dec/2025:10:38:06] ENGINE Serving on http://192.168.122.100:8765
Dec 15 10:38:06 compute-0 podman[90757]: 2025-12-15 10:38:06.595476446 +0000 UTC m=+0.195448386 container exec_died 79ed8dc51b1852fd831764c2817cfa3745ae4937bc9015bdb965e376ab3cc58d (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:38:06 compute-0 podman[90788]: 2025-12-15 10:38:06.624369542 +0000 UTC m=+0.061045679 container create 9ad09b00ff37e0699ce14f2895c1e4a430ffe38c2acb2e31388ebd842bdfbb73 (image=quay.io/ceph/ceph:v19, name=funny_goldberg, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:38:06 compute-0 systemd[1]: Started libpod-conmon-9ad09b00ff37e0699ce14f2895c1e4a430ffe38c2acb2e31388ebd842bdfbb73.scope.
Dec 15 10:38:06 compute-0 podman[90788]: 2025-12-15 10:38:06.589950215 +0000 UTC m=+0.026626352 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:38:06 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd453d216e7342f3a08a01adb1b0ae82b885ec2daf4809e518b7a745fe4dafb6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd453d216e7342f3a08a01adb1b0ae82b885ec2daf4809e518b7a745fe4dafb6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd453d216e7342f3a08a01adb1b0ae82b885ec2daf4809e518b7a745fe4dafb6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:06 compute-0 ceph-mgr[74651]: [cephadm INFO cherrypy.error] [15/Dec/2025:10:38:06] ENGINE Serving on https://192.168.122.100:7150
Dec 15 10:38:06 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : [15/Dec/2025:10:38:06] ENGINE Serving on https://192.168.122.100:7150
Dec 15 10:38:06 compute-0 ceph-mgr[74651]: [cephadm INFO cherrypy.error] [15/Dec/2025:10:38:06] ENGINE Bus STARTED
Dec 15 10:38:06 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : [15/Dec/2025:10:38:06] ENGINE Bus STARTED
Dec 15 10:38:06 compute-0 ceph-mgr[74651]: [cephadm INFO cherrypy.error] [15/Dec/2025:10:38:06] ENGINE Client ('192.168.122.100', 54598) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 15 10:38:06 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : [15/Dec/2025:10:38:06] ENGINE Client ('192.168.122.100', 54598) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 15 10:38:06 compute-0 podman[90788]: 2025-12-15 10:38:06.707931546 +0000 UTC m=+0.144607683 container init 9ad09b00ff37e0699ce14f2895c1e4a430ffe38c2acb2e31388ebd842bdfbb73 (image=quay.io/ceph/ceph:v19, name=funny_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 15 10:38:06 compute-0 podman[90788]: 2025-12-15 10:38:06.714017615 +0000 UTC m=+0.150693732 container start 9ad09b00ff37e0699ce14f2895c1e4a430ffe38c2acb2e31388ebd842bdfbb73 (image=quay.io/ceph/ceph:v19, name=funny_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:38:06 compute-0 podman[90788]: 2025-12-15 10:38:06.716972082 +0000 UTC m=+0.153648199 container attach 9ad09b00ff37e0699ce14f2895c1e4a430ffe38c2acb2e31388ebd842bdfbb73 (image=quay.io/ceph/ceph:v19, name=funny_goldberg, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 15 10:38:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:38:06 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:38:06 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:07 compute-0 podman[90934]: 2025-12-15 10:38:07.021182706 +0000 UTC m=+0.073137525 container exec af7cec967afbf19ae8aa93bce2194c8b8c2dd9c500f2d7f44ece310d3a1d4cc1 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:38:07 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v5: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:38:07 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:38:07 compute-0 podman[90934]: 2025-12-15 10:38:07.057531445 +0000 UTC m=+0.109486214 container exec_died af7cec967afbf19ae8aa93bce2194c8b8c2dd9c500f2d7f44ece310d3a1d4cc1 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:38:07 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14373 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:38:07 compute-0 ceph-mgr[74651]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 15 10:38:07 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 15 10:38:07 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 15 10:38:07 compute-0 sudo[90626]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:07 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:38:07 compute-0 ceph-mon[74356]: 4.17 scrub starts
Dec 15 10:38:07 compute-0 ceph-mon[74356]: 4.17 scrub ok
Dec 15 10:38:07 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec 15 10:38:07 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec 15 10:38:07 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec 15 10:38:07 compute-0 ceph-mon[74356]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 15 10:38:07 compute-0 ceph-mon[74356]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 15 10:38:07 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 15 10:38:07 compute-0 ceph-mon[74356]: osdmap e40: 3 total, 2 up, 3 in
Dec 15 10:38:07 compute-0 ceph-mon[74356]: fsmap cephfs:0
Dec 15 10:38:07 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:07 compute-0 ceph-mon[74356]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 15 10:38:07 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:07 compute-0 ceph-mon[74356]: [15/Dec/2025:10:38:06] ENGINE Bus STARTING
Dec 15 10:38:07 compute-0 ceph-mon[74356]: [15/Dec/2025:10:38:06] ENGINE Serving on http://192.168.122.100:8765
Dec 15 10:38:07 compute-0 ceph-mon[74356]: [15/Dec/2025:10:38:06] ENGINE Serving on https://192.168.122.100:7150
Dec 15 10:38:07 compute-0 ceph-mon[74356]: [15/Dec/2025:10:38:06] ENGINE Bus STARTED
Dec 15 10:38:07 compute-0 ceph-mon[74356]: [15/Dec/2025:10:38:06] ENGINE Client ('192.168.122.100', 54598) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 15 10:38:07 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:07 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:07 compute-0 ceph-mon[74356]: 3.13 scrub starts
Dec 15 10:38:07 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:07 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.difmqj(active, since 2s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:38:07 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:38:07 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:07 compute-0 funny_goldberg[90837]: Scheduled mds.cephfs update...
Dec 15 10:38:07 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:07 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:38:07 compute-0 systemd[1]: libpod-9ad09b00ff37e0699ce14f2895c1e4a430ffe38c2acb2e31388ebd842bdfbb73.scope: Deactivated successfully.
Dec 15 10:38:07 compute-0 podman[90788]: 2025-12-15 10:38:07.45343757 +0000 UTC m=+0.890113707 container died 9ad09b00ff37e0699ce14f2895c1e4a430ffe38c2acb2e31388ebd842bdfbb73 (image=quay.io/ceph/ceph:v19, name=funny_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 15 10:38:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd453d216e7342f3a08a01adb1b0ae82b885ec2daf4809e518b7a745fe4dafb6-merged.mount: Deactivated successfully.
Dec 15 10:38:07 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:07 compute-0 podman[90788]: 2025-12-15 10:38:07.497801791 +0000 UTC m=+0.934477908 container remove 9ad09b00ff37e0699ce14f2895c1e4a430ffe38c2acb2e31388ebd842bdfbb73 (image=quay.io/ceph/ceph:v19, name=funny_goldberg, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:38:07 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:07 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.tlqguq restarted
Dec 15 10:38:07 compute-0 systemd[1]: libpod-conmon-9ad09b00ff37e0699ce14f2895c1e4a430ffe38c2acb2e31388ebd842bdfbb73.scope: Deactivated successfully.
Dec 15 10:38:07 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.tlqguq started
Dec 15 10:38:07 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec 15 10:38:07 compute-0 sudo[90748]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:07 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec 15 10:38:07 compute-0 ceph-mgr[74651]: [devicehealth INFO root] Check health
Dec 15 10:38:07 compute-0 sudo[90996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:38:07 compute-0 sudo[90996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:07 compute-0 sudo[90996]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:07 compute-0 sudo[91021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 15 10:38:07 compute-0 sudo[91021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:07 compute-0 sudo[91069]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hivgdgrhysbuoocbeurupeympweysurj ; /usr/bin/python3'
Dec 15 10:38:07 compute-0 sudo[91069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:38:07 compute-0 python3[91071]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:38:07 compute-0 podman[91074]: 2025-12-15 10:38:07.838183398 +0000 UTC m=+0.042307505 container create 3f615ed5a0e960a5264ef2db0e3548a7f1ba620e7c5cfd830ecb91896967886b (image=quay.io/ceph/ceph:v19, name=xenodochial_robinson, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 15 10:38:07 compute-0 systemd[1]: Started libpod-conmon-3f615ed5a0e960a5264ef2db0e3548a7f1ba620e7c5cfd830ecb91896967886b.scope.
Dec 15 10:38:07 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:38:07 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:07 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:38:07 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3176d762c1a65e85aed622d690decdc8691b6665fbf94815aa2585134fda4413/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3176d762c1a65e85aed622d690decdc8691b6665fbf94815aa2585134fda4413/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3176d762c1a65e85aed622d690decdc8691b6665fbf94815aa2585134fda4413/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:07 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:07 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec 15 10:38:07 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 15 10:38:07 compute-0 podman[91074]: 2025-12-15 10:38:07.821295566 +0000 UTC m=+0.025419713 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:38:07 compute-0 podman[91074]: 2025-12-15 10:38:07.926677073 +0000 UTC m=+0.130801210 container init 3f615ed5a0e960a5264ef2db0e3548a7f1ba620e7c5cfd830ecb91896967886b (image=quay.io/ceph/ceph:v19, name=xenodochial_robinson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:38:07 compute-0 podman[91074]: 2025-12-15 10:38:07.932487594 +0000 UTC m=+0.136611711 container start 3f615ed5a0e960a5264ef2db0e3548a7f1ba620e7c5cfd830ecb91896967886b (image=quay.io/ceph/ceph:v19, name=xenodochial_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 15 10:38:07 compute-0 podman[91074]: 2025-12-15 10:38:07.937598752 +0000 UTC m=+0.141722869 container attach 3f615ed5a0e960a5264ef2db0e3548a7f1ba620e7c5cfd830ecb91896967886b (image=quay.io/ceph/ceph:v19, name=xenodochial_robinson, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Dec 15 10:38:08 compute-0 sudo[91021]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:08 compute-0 sudo[91142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:38:08 compute-0 sudo[91142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:08 compute-0 sudo[91142]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:08 compute-0 sudo[91167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 15 10:38:08 compute-0 sudo[91167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:08 compute-0 ceph-mon[74356]: 4.16 scrub starts
Dec 15 10:38:08 compute-0 ceph-mon[74356]: 4.16 scrub ok
Dec 15 10:38:08 compute-0 ceph-mon[74356]: 3.13 scrub ok
Dec 15 10:38:08 compute-0 ceph-mon[74356]: pgmap v5: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:38:08 compute-0 ceph-mon[74356]: from='client.14373 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:38:08 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='client.14385 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:38:08 compute-0 ceph-mon[74356]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 15 10:38:08 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:08 compute-0 ceph-mon[74356]: mgrmap e25: compute-0.difmqj(active, since 2s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:38:08 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:08 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:08 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:08 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:08 compute-0 ceph-mon[74356]: Standby manager daemon compute-1.tlqguq restarted
Dec 15 10:38:08 compute-0 ceph-mon[74356]: Standby manager daemon compute-1.tlqguq started
Dec 15 10:38:08 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:08 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:08 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 15 10:38:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Dec 15 10:38:08 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Dec 15 10:38:08 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec 15 10:38:08 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec 15 10:38:08 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.difmqj(active, since 4s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:38:08 compute-0 sudo[91167]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:38:08 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:38:08 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 15 10:38:08 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 15 10:38:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:38:08 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:38:08 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 15 10:38:08 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 15 10:38:08 compute-0 ceph-mgr[74651]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 127.9M
Dec 15 10:38:08 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 127.9M
Dec 15 10:38:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 15 10:38:08 compute-0 ceph-mgr[74651]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Dec 15 10:38:08 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Dec 15 10:38:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:38:08 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 15 10:38:08 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:38:08 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 15 10:38:08 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 15 10:38:08 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec 15 10:38:08 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec 15 10:38:08 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec 15 10:38:08 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec 15 10:38:08 compute-0 sudo[91214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 15 10:38:08 compute-0 sudo[91214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:08 compute-0 sudo[91214]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:08 compute-0 sudo[91239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph
Dec 15 10:38:08 compute-0 sudo[91239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:08 compute-0 sudo[91239]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:08 compute-0 sudo[91264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.conf.new
Dec 15 10:38:08 compute-0 sudo[91264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:09 compute-0 sudo[91264]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:09 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v6: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:38:09 compute-0 sudo[91289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:38:09 compute-0 sudo[91289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:09 compute-0 sudo[91289]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:09 compute-0 sudo[91314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.conf.new
Dec 15 10:38:09 compute-0 sudo[91314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:09 compute-0 sudo[91314]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:09 compute-0 sudo[91362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.conf.new
Dec 15 10:38:09 compute-0 sudo[91362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:09 compute-0 sudo[91362]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:09 compute-0 sudo[91387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.conf.new
Dec 15 10:38:09 compute-0 sudo[91387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:09 compute-0 sudo[91387]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:09 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec 15 10:38:09 compute-0 sudo[91412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 15 10:38:09 compute-0 ceph-mon[74356]: 5.17 scrub starts
Dec 15 10:38:09 compute-0 ceph-mon[74356]: 5.17 scrub ok
Dec 15 10:38:09 compute-0 ceph-mon[74356]: 5.15 scrub starts
Dec 15 10:38:09 compute-0 ceph-mon[74356]: 5.15 scrub ok
Dec 15 10:38:09 compute-0 ceph-mon[74356]: from='client.14385 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 15 10:38:09 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Dec 15 10:38:09 compute-0 ceph-mon[74356]: mgrmap e26: compute-0.difmqj(active, since 4s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:38:09 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:09 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:09 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 15 10:38:09 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:09 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:09 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 15 10:38:09 compute-0 ceph-mon[74356]: Adjusting osd_memory_target on compute-1 to 127.9M
Dec 15 10:38:09 compute-0 ceph-mon[74356]: Unable to set osd_memory_target on compute-1 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Dec 15 10:38:09 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:09 compute-0 sudo[91412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:09 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:38:09 compute-0 ceph-mon[74356]: Updating compute-0:/etc/ceph/ceph.conf
Dec 15 10:38:09 compute-0 ceph-mon[74356]: Updating compute-1:/etc/ceph/ceph.conf
Dec 15 10:38:09 compute-0 ceph-mon[74356]: Updating compute-2:/etc/ceph/ceph.conf
Dec 15 10:38:09 compute-0 ceph-mon[74356]: 5.16 scrub starts
Dec 15 10:38:09 compute-0 sudo[91412]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:09 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:38:09 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:38:09 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Dec 15 10:38:09 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e41 e41: 3 total, 2 up, 3 in
Dec 15 10:38:09 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 2 up, 3 in
Dec 15 10:38:09 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 15 10:38:09 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 15 10:38:09 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:09 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 41 pg[8.0( empty local-lis/les=0/0 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:38:09 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Dec 15 10:38:09 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Dec 15 10:38:09 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:38:09 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:38:09 compute-0 sudo[91437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config
Dec 15 10:38:09 compute-0 sudo[91437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:09 compute-0 sudo[91437]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:09 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:38:09 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:38:09 compute-0 sudo[91462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config
Dec 15 10:38:09 compute-0 sudo[91462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:09 compute-0 sudo[91462]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:09 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.14 deep-scrub starts
Dec 15 10:38:09 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.14 deep-scrub ok
Dec 15 10:38:09 compute-0 sudo[91487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf.new
Dec 15 10:38:09 compute-0 sudo[91487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:09 compute-0 sudo[91487]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:09 compute-0 sudo[91512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:38:09 compute-0 sudo[91512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:09 compute-0 sudo[91512]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:09 compute-0 sudo[91537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf.new
Dec 15 10:38:09 compute-0 sudo[91537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:09 compute-0 sudo[91537]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:09 compute-0 sudo[91585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf.new
Dec 15 10:38:09 compute-0 sudo[91585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:09 compute-0 sudo[91585]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:09 compute-0 sudo[91610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf.new
Dec 15 10:38:09 compute-0 sudo[91610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:09 compute-0 sudo[91610]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:09 compute-0 sudo[91635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf.new /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:38:09 compute-0 sudo[91635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:09 compute-0 sudo[91635]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:09 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:38:09 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:38:09 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:38:09 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:38:09 compute-0 sudo[91660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 15 10:38:09 compute-0 sudo[91660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:09 compute-0 sudo[91660]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:09 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:38:09 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:38:09 compute-0 sudo[91685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph
Dec 15 10:38:09 compute-0 sudo[91685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:09 compute-0 sudo[91685]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:10 compute-0 sudo[91710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.client.admin.keyring.new
Dec 15 10:38:10 compute-0 sudo[91710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:10 compute-0 sudo[91710]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:10 compute-0 sudo[91735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:38:10 compute-0 sudo[91735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:10 compute-0 sudo[91735]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:10 compute-0 sudo[91760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.client.admin.keyring.new
Dec 15 10:38:10 compute-0 sudo[91760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:10 compute-0 sudo[91760]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:10 compute-0 sudo[91808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.client.admin.keyring.new
Dec 15 10:38:10 compute-0 sudo[91808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:10 compute-0 sudo[91808]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:10 compute-0 sudo[91833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.client.admin.keyring.new
Dec 15 10:38:10 compute-0 sudo[91833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:10 compute-0 sudo[91833]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:10 compute-0 ceph-mon[74356]: 3.12 scrub starts
Dec 15 10:38:10 compute-0 ceph-mon[74356]: 3.12 scrub ok
Dec 15 10:38:10 compute-0 ceph-mon[74356]: 5.16 scrub ok
Dec 15 10:38:10 compute-0 ceph-mon[74356]: pgmap v6: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:38:10 compute-0 ceph-mon[74356]: Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:38:10 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Dec 15 10:38:10 compute-0 ceph-mon[74356]: osdmap e41: 3 total, 2 up, 3 in
Dec 15 10:38:10 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:10 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Dec 15 10:38:10 compute-0 ceph-mon[74356]: Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:38:10 compute-0 ceph-mon[74356]: Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:38:10 compute-0 ceph-mon[74356]: 3.10 scrub starts
Dec 15 10:38:10 compute-0 ceph-mon[74356]: 3.10 scrub ok
Dec 15 10:38:10 compute-0 ceph-mon[74356]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:38:10 compute-0 ceph-mon[74356]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:38:10 compute-0 ceph-mon[74356]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:38:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec 15 10:38:10 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Dec 15 10:38:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e42 e42: 3 total, 2 up, 3 in
Dec 15 10:38:10 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 2 up, 3 in
Dec 15 10:38:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 15 10:38:10 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:10 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 15 10:38:10 compute-0 sudo[91858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 15 10:38:10 compute-0 sudo[91858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:10 compute-0 sudo[91858]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 42 pg[8.0( empty local-lis/les=41/42 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:38:10 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:38:10 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:38:10 compute-0 ceph-mgr[74651]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Dec 15 10:38:10 compute-0 ceph-mgr[74651]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 15 10:38:10 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 15 10:38:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 15 10:38:10 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:10 compute-0 ceph-mgr[74651]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 15 10:38:10 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 15 10:38:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 15 10:38:10 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:38:10 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:38:10 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:10 compute-0 sudo[91893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config
Dec 15 10:38:10 compute-0 systemd[1]: libpod-3f615ed5a0e960a5264ef2db0e3548a7f1ba620e7c5cfd830ecb91896967886b.scope: Deactivated successfully.
Dec 15 10:38:10 compute-0 sudo[91893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:10 compute-0 podman[91074]: 2025-12-15 10:38:10.461257286 +0000 UTC m=+2.665381413 container died 3f615ed5a0e960a5264ef2db0e3548a7f1ba620e7c5cfd830ecb91896967886b (image=quay.io/ceph/ceph:v19, name=xenodochial_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 15 10:38:10 compute-0 sudo[91893]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:10 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.14 deep-scrub starts
Dec 15 10:38:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-3176d762c1a65e85aed622d690decdc8691b6665fbf94815aa2585134fda4413-merged.mount: Deactivated successfully.
Dec 15 10:38:10 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.14 deep-scrub ok
Dec 15 10:38:10 compute-0 podman[91074]: 2025-12-15 10:38:10.504468271 +0000 UTC m=+2.708592398 container remove 3f615ed5a0e960a5264ef2db0e3548a7f1ba620e7c5cfd830ecb91896967886b (image=quay.io/ceph/ceph:v19, name=xenodochial_robinson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:38:10 compute-0 systemd[1]: libpod-conmon-3f615ed5a0e960a5264ef2db0e3548a7f1ba620e7c5cfd830ecb91896967886b.scope: Deactivated successfully.
Dec 15 10:38:10 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:38:10 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:38:10 compute-0 sudo[91069]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:10 compute-0 sudo[91926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config
Dec 15 10:38:10 compute-0 sudo[91926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:10 compute-0 sudo[91926]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:10 compute-0 sudo[91955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring.new
Dec 15 10:38:10 compute-0 sudo[91955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:10 compute-0 sudo[91955]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:10 compute-0 sudo[91980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:38:10 compute-0 sudo[91980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:10 compute-0 sudo[91980]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:10 compute-0 sudo[92005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring.new
Dec 15 10:38:10 compute-0 sudo[92005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:10 compute-0 sudo[92005]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:10 compute-0 sudo[92053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring.new
Dec 15 10:38:10 compute-0 sudo[92053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:10 compute-0 sudo[92053]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:10 compute-0 sudo[92078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring.new
Dec 15 10:38:10 compute-0 sudo[92078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:10 compute-0 sudo[92078]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:10 compute-0 sudo[92103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring.new /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:38:10 compute-0 sudo[92103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:10 compute-0 sudo[92103]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:38:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:38:10 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:38:11 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:11 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:38:11 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:11 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v9: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:38:11 compute-0 sudo[92151]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqgpehszfszntjzlgtrtlwjstkpllqlc ; /usr/bin/python3'
Dec 15 10:38:11 compute-0 sudo[92151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:38:11 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:11 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:38:11 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:11 compute-0 python3[92153]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:38:11 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:38:11 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:11 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 15 10:38:11 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:11 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev 86cf0de5-a14d-4462-9309-460251a6dce3 (Updating node-exporter deployment (+2 -> 3))
Dec 15 10:38:11 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Dec 15 10:38:11 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Dec 15 10:38:11 compute-0 podman[92154]: 2025-12-15 10:38:11.250330775 +0000 UTC m=+0.039479303 container create a4fe3b0c9eacda9c564cc90700d090bbb19febe4ac0e89d9b78af782809513c9 (image=quay.io/ceph/ceph:v19, name=dazzling_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 15 10:38:11 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:38:11 compute-0 systemd[1]: Started libpod-conmon-a4fe3b0c9eacda9c564cc90700d090bbb19febe4ac0e89d9b78af782809513c9.scope.
Dec 15 10:38:11 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dbeb297ae912df67b81c0ec57c0379f6a0d831f7ea1c472b0dc36c5b912e21b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dbeb297ae912df67b81c0ec57c0379f6a0d831f7ea1c472b0dc36c5b912e21b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:11 compute-0 podman[92154]: 2025-12-15 10:38:11.329339421 +0000 UTC m=+0.118487969 container init a4fe3b0c9eacda9c564cc90700d090bbb19febe4ac0e89d9b78af782809513c9 (image=quay.io/ceph/ceph:v19, name=dazzling_blackburn, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 15 10:38:11 compute-0 podman[92154]: 2025-12-15 10:38:11.234816948 +0000 UTC m=+0.023965496 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:38:11 compute-0 podman[92154]: 2025-12-15 10:38:11.335821123 +0000 UTC m=+0.124969651 container start a4fe3b0c9eacda9c564cc90700d090bbb19febe4ac0e89d9b78af782809513c9 (image=quay.io/ceph/ceph:v19, name=dazzling_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:38:11 compute-0 podman[92154]: 2025-12-15 10:38:11.339639028 +0000 UTC m=+0.128787586 container attach a4fe3b0c9eacda9c564cc90700d090bbb19febe4ac0e89d9b78af782809513c9 (image=quay.io/ceph/ceph:v19, name=dazzling_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:38:11 compute-0 ceph-mon[74356]: 5.14 deep-scrub starts
Dec 15 10:38:11 compute-0 ceph-mon[74356]: 5.14 deep-scrub ok
Dec 15 10:38:11 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Dec 15 10:38:11 compute-0 ceph-mon[74356]: osdmap e42: 3 total, 2 up, 3 in
Dec 15 10:38:11 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:11 compute-0 ceph-mon[74356]: Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:38:11 compute-0 ceph-mon[74356]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 15 10:38:11 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:11 compute-0 ceph-mon[74356]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 15 10:38:11 compute-0 ceph-mon[74356]: Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:38:11 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:11 compute-0 ceph-mon[74356]: 4.14 deep-scrub starts
Dec 15 10:38:11 compute-0 ceph-mon[74356]: 4.14 deep-scrub ok
Dec 15 10:38:11 compute-0 ceph-mon[74356]: Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:38:11 compute-0 ceph-mon[74356]: 3.11 deep-scrub starts
Dec 15 10:38:11 compute-0 ceph-mon[74356]: 3.11 deep-scrub ok
Dec 15 10:38:11 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:11 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:11 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:11 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:11 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:11 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:11 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:11 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec 15 10:38:11 compute-0 dazzling_blackburn[92169]: ERROR: invalid flag --daemon-type
Dec 15 10:38:11 compute-0 systemd[1]: libpod-a4fe3b0c9eacda9c564cc90700d090bbb19febe4ac0e89d9b78af782809513c9.scope: Deactivated successfully.
Dec 15 10:38:11 compute-0 podman[92154]: 2025-12-15 10:38:11.390973568 +0000 UTC m=+0.180122096 container died a4fe3b0c9eacda9c564cc90700d090bbb19febe4ac0e89d9b78af782809513c9 (image=quay.io/ceph/ceph:v19, name=dazzling_blackburn, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec 15 10:38:11 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e43 e43: 3 total, 2 up, 3 in
Dec 15 10:38:11 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 2 up, 3 in
Dec 15 10:38:11 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 15 10:38:11 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:11 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 15 10:38:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dbeb297ae912df67b81c0ec57c0379f6a0d831f7ea1c472b0dc36c5b912e21b-merged.mount: Deactivated successfully.
Dec 15 10:38:11 compute-0 podman[92154]: 2025-12-15 10:38:11.429768827 +0000 UTC m=+0.218917385 container remove a4fe3b0c9eacda9c564cc90700d090bbb19febe4ac0e89d9b78af782809513c9 (image=quay.io/ceph/ceph:v19, name=dazzling_blackburn, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 15 10:38:11 compute-0 systemd[1]: libpod-conmon-a4fe3b0c9eacda9c564cc90700d090bbb19febe4ac0e89d9b78af782809513c9.scope: Deactivated successfully.
Dec 15 10:38:11 compute-0 sudo[92151]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:11 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.difmqj(active, since 7s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:38:11 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Dec 15 10:38:11 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Dec 15 10:38:11 compute-0 sshd-session[90450]: Connection closed by invalid user  115.190.87.147 port 58588 [preauth]
Dec 15 10:38:12 compute-0 ceph-mon[74356]: pgmap v9: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:38:12 compute-0 ceph-mon[74356]: Deploying daemon node-exporter.compute-1 on compute-1
Dec 15 10:38:12 compute-0 ceph-mon[74356]: osdmap e43: 3 total, 2 up, 3 in
Dec 15 10:38:12 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:12 compute-0 ceph-mon[74356]: mgrmap e27: compute-0.difmqj(active, since 7s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:38:12 compute-0 ceph-mon[74356]: 5.12 scrub starts
Dec 15 10:38:12 compute-0 ceph-mon[74356]: 5.12 scrub ok
Dec 15 10:38:12 compute-0 ceph-mon[74356]: 4.9 scrub starts
Dec 15 10:38:12 compute-0 ceph-mon[74356]: 4.9 scrub ok
Dec 15 10:38:12 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec 15 10:38:12 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec 15 10:38:13 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v11: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Dec 15 10:38:13 compute-0 ceph-mon[74356]: 4.12 scrub starts
Dec 15 10:38:13 compute-0 ceph-mon[74356]: 4.12 scrub ok
Dec 15 10:38:13 compute-0 ceph-mon[74356]: 3.f deep-scrub starts
Dec 15 10:38:13 compute-0 ceph-mon[74356]: 3.f deep-scrub ok
Dec 15 10:38:13 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Dec 15 10:38:13 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Dec 15 10:38:14 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:38:14 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:14 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:38:14 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:14 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec 15 10:38:14 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:14 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Dec 15 10:38:14 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Dec 15 10:38:14 compute-0 ceph-mon[74356]: pgmap v11: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Dec 15 10:38:14 compute-0 ceph-mon[74356]: 5.13 scrub starts
Dec 15 10:38:14 compute-0 ceph-mon[74356]: 5.13 scrub ok
Dec 15 10:38:14 compute-0 ceph-mon[74356]: 4.8 deep-scrub starts
Dec 15 10:38:14 compute-0 ceph-mon[74356]: 4.8 deep-scrub ok
Dec 15 10:38:14 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:14 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:14 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:14 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec 15 10:38:14 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec 15 10:38:15 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v12: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Dec 15 10:38:15 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec 15 10:38:15 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec 15 10:38:15 compute-0 ceph-mon[74356]: Deploying daemon node-exporter.compute-2 on compute-2
Dec 15 10:38:15 compute-0 ceph-mon[74356]: 4.11 scrub starts
Dec 15 10:38:15 compute-0 ceph-mon[74356]: 4.11 scrub ok
Dec 15 10:38:15 compute-0 ceph-mon[74356]: 5.9 scrub starts
Dec 15 10:38:15 compute-0 ceph-mon[74356]: 5.9 scrub ok
Dec 15 10:38:16 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:38:16 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Dec 15 10:38:16 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Dec 15 10:38:16 compute-0 ceph-mon[74356]: pgmap v12: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Dec 15 10:38:16 compute-0 ceph-mon[74356]: 4.10 scrub starts
Dec 15 10:38:16 compute-0 ceph-mon[74356]: 4.10 scrub ok
Dec 15 10:38:16 compute-0 ceph-mon[74356]: 4.d scrub starts
Dec 15 10:38:16 compute-0 ceph-mon[74356]: 4.d scrub ok
Dec 15 10:38:17 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v13: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 0 B/s wr, 9 op/s
Dec 15 10:38:17 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:38:17 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:17 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:38:17 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:17 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec 15 10:38:17 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:17 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev 86cf0de5-a14d-4462-9309-460251a6dce3 (Updating node-exporter deployment (+2 -> 3))
Dec 15 10:38:17 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event 86cf0de5-a14d-4462-9309-460251a6dce3 (Updating node-exporter deployment (+2 -> 3)) in 6 seconds
Dec 15 10:38:17 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec 15 10:38:17 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:17 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 15 10:38:17 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 15 10:38:17 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 15 10:38:17 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 15 10:38:17 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:38:17 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:17 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 15 10:38:17 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 15 10:38:17 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:38:17 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:17 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec 15 10:38:17 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec 15 10:38:17 compute-0 sudo[92202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:38:17 compute-0 sudo[92202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:17 compute-0 sudo[92202]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:17 compute-0 sudo[92227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 15 10:38:17 compute-0 sudo[92227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:17 compute-0 ceph-mon[74356]: 3.17 scrub starts
Dec 15 10:38:17 compute-0 ceph-mon[74356]: 3.17 scrub ok
Dec 15 10:38:17 compute-0 ceph-mon[74356]: 3.c scrub starts
Dec 15 10:38:17 compute-0 ceph-mon[74356]: 3.c scrub ok
Dec 15 10:38:17 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:17 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:17 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:17 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:17 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 15 10:38:17 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 15 10:38:17 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:17 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 15 10:38:17 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:18 compute-0 podman[92293]: 2025-12-15 10:38:18.095987972 +0000 UTC m=+0.051468916 container create b89450185e81cfd0db4af05f9fdcfb9df937bbbbfba9fe7630fe764763978ffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_northcutt, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:38:18 compute-0 systemd[1]: Started libpod-conmon-b89450185e81cfd0db4af05f9fdcfb9df937bbbbfba9fe7630fe764763978ffc.scope.
Dec 15 10:38:18 compute-0 podman[92293]: 2025-12-15 10:38:18.070095634 +0000 UTC m=+0.025576618 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:38:18 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:38:18 compute-0 podman[92293]: 2025-12-15 10:38:18.199612462 +0000 UTC m=+0.155093436 container init b89450185e81cfd0db4af05f9fdcfb9df937bbbbfba9fe7630fe764763978ffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_northcutt, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:38:18 compute-0 podman[92293]: 2025-12-15 10:38:18.204860234 +0000 UTC m=+0.160341168 container start b89450185e81cfd0db4af05f9fdcfb9df937bbbbfba9fe7630fe764763978ffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_northcutt, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 15 10:38:18 compute-0 podman[92293]: 2025-12-15 10:38:18.208515864 +0000 UTC m=+0.163996868 container attach b89450185e81cfd0db4af05f9fdcfb9df937bbbbfba9fe7630fe764763978ffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_northcutt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:38:18 compute-0 loving_northcutt[92309]: 167 167
Dec 15 10:38:18 compute-0 systemd[1]: libpod-b89450185e81cfd0db4af05f9fdcfb9df937bbbbfba9fe7630fe764763978ffc.scope: Deactivated successfully.
Dec 15 10:38:18 compute-0 podman[92293]: 2025-12-15 10:38:18.211965947 +0000 UTC m=+0.167446891 container died b89450185e81cfd0db4af05f9fdcfb9df937bbbbfba9fe7630fe764763978ffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 15 10:38:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ba727c1608db4207d44fd290e781454450f8343d6be4dc42267e953a78b2fa2-merged.mount: Deactivated successfully.
Dec 15 10:38:18 compute-0 podman[92293]: 2025-12-15 10:38:18.255062277 +0000 UTC m=+0.210543191 container remove b89450185e81cfd0db4af05f9fdcfb9df937bbbbfba9fe7630fe764763978ffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_northcutt, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 15 10:38:18 compute-0 systemd[1]: libpod-conmon-b89450185e81cfd0db4af05f9fdcfb9df937bbbbfba9fe7630fe764763978ffc.scope: Deactivated successfully.
Dec 15 10:38:18 compute-0 podman[92331]: 2025-12-15 10:38:18.412607282 +0000 UTC m=+0.042590275 container create a5b40de765c43ec67c41a7733a0f91f745e16c6cf87529811d3810395f59080c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_lalande, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:38:18 compute-0 systemd[1]: Started libpod-conmon-a5b40de765c43ec67c41a7733a0f91f745e16c6cf87529811d3810395f59080c.scope.
Dec 15 10:38:18 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df78db289ab8d2bd87b9ae3fa83c2b0626a1a6f6d6f19d571965673176ce86e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df78db289ab8d2bd87b9ae3fa83c2b0626a1a6f6d6f19d571965673176ce86e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df78db289ab8d2bd87b9ae3fa83c2b0626a1a6f6d6f19d571965673176ce86e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df78db289ab8d2bd87b9ae3fa83c2b0626a1a6f6d6f19d571965673176ce86e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df78db289ab8d2bd87b9ae3fa83c2b0626a1a6f6d6f19d571965673176ce86e2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:18 compute-0 podman[92331]: 2025-12-15 10:38:18.46937811 +0000 UTC m=+0.099361153 container init a5b40de765c43ec67c41a7733a0f91f745e16c6cf87529811d3810395f59080c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_lalande, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 15 10:38:18 compute-0 podman[92331]: 2025-12-15 10:38:18.478944342 +0000 UTC m=+0.108927355 container start a5b40de765c43ec67c41a7733a0f91f745e16c6cf87529811d3810395f59080c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:38:18 compute-0 podman[92331]: 2025-12-15 10:38:18.483969757 +0000 UTC m=+0.113952750 container attach a5b40de765c43ec67c41a7733a0f91f745e16c6cf87529811d3810395f59080c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_lalande, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 15 10:38:18 compute-0 podman[92331]: 2025-12-15 10:38:18.392893327 +0000 UTC m=+0.022876330 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:38:18 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec 15 10:38:18 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec 15 10:38:18 compute-0 loving_lalande[92347]: --> passed data devices: 0 physical, 1 LVM
Dec 15 10:38:18 compute-0 loving_lalande[92347]: --> All data devices are unavailable
Dec 15 10:38:18 compute-0 systemd[1]: libpod-a5b40de765c43ec67c41a7733a0f91f745e16c6cf87529811d3810395f59080c.scope: Deactivated successfully.
Dec 15 10:38:18 compute-0 podman[92331]: 2025-12-15 10:38:18.818085979 +0000 UTC m=+0.448068972 container died a5b40de765c43ec67c41a7733a0f91f745e16c6cf87529811d3810395f59080c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_lalande, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 15 10:38:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-df78db289ab8d2bd87b9ae3fa83c2b0626a1a6f6d6f19d571965673176ce86e2-merged.mount: Deactivated successfully.
Dec 15 10:38:18 compute-0 podman[92331]: 2025-12-15 10:38:18.861158889 +0000 UTC m=+0.491141902 container remove a5b40de765c43ec67c41a7733a0f91f745e16c6cf87529811d3810395f59080c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_lalande, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:38:18 compute-0 systemd[1]: libpod-conmon-a5b40de765c43ec67c41a7733a0f91f745e16c6cf87529811d3810395f59080c.scope: Deactivated successfully.
Dec 15 10:38:18 compute-0 sudo[92227]: pam_unix(sudo:session): session closed for user root
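The "All data devices are unavailable" report from loving_lalande above is ceph-volume's drive-selection summary for this host: the only candidate it was passed is an LVM volume (ceph_vg0), and the lvm listing further down in this log shows that volume already carrying osd.0, so no new OSD is created here. A minimal cross-check sketch, assuming the "ceph" CLI and an admin keyring are reachable on this host (for example inside cephadm shell); the hostname comes from the log, and the JSON field names are assumptions based on typical "ceph orch device ls" output:

    # Sketch: ask the orchestrator which devices on compute-0 are still available for
    # new OSDs. Assumes the "ceph" CLI and admin keyring are present; field names
    # ("path", "available", "rejected_reasons") are assumptions from typical output.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "orch", "device", "ls", "compute-0", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for host in json.loads(out):
        for dev in host.get("devices", []):
            state = "available" if dev.get("available") else "unavailable"
            print(dev.get("path"), state, "; ".join(dev.get("rejected_reasons", [])))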
Dec 15 10:38:18 compute-0 sudo[92373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:38:18 compute-0 sudo[92373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:18 compute-0 sudo[92373]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:19 compute-0 sudo[92398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 -- lvm list --format json
Dec 15 10:38:19 compute-0 sudo[92398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:19 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v14: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
Dec 15 10:38:19 compute-0 ceph-mon[74356]: pgmap v13: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 0 B/s wr, 9 op/s
Dec 15 10:38:19 compute-0 ceph-mon[74356]: 3.18 scrub starts
Dec 15 10:38:19 compute-0 ceph-mon[74356]: 3.18 scrub ok
Dec 15 10:38:19 compute-0 ceph-mon[74356]: 5.4 scrub starts
Dec 15 10:38:19 compute-0 ceph-mon[74356]: 5.4 scrub ok
Dec 15 10:38:19 compute-0 podman[92463]: 2025-12-15 10:38:19.413608095 +0000 UTC m=+0.045967554 container create b9454360481cd19337c683239b32d7ba6a5afb1971c9a5a5ace16bf1772adf33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_feynman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 15 10:38:19 compute-0 systemd[1]: Started libpod-conmon-b9454360481cd19337c683239b32d7ba6a5afb1971c9a5a5ace16bf1772adf33.scope.
Dec 15 10:38:19 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:38:19 compute-0 podman[92463]: 2025-12-15 10:38:19.390282963 +0000 UTC m=+0.022642422 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:38:19 compute-0 podman[92463]: 2025-12-15 10:38:19.496169817 +0000 UTC m=+0.128529256 container init b9454360481cd19337c683239b32d7ba6a5afb1971c9a5a5ace16bf1772adf33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:38:19 compute-0 podman[92463]: 2025-12-15 10:38:19.502762823 +0000 UTC m=+0.135122272 container start b9454360481cd19337c683239b32d7ba6a5afb1971c9a5a5ace16bf1772adf33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_feynman, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default)
Dec 15 10:38:19 compute-0 strange_feynman[92479]: 167 167
Dec 15 10:38:19 compute-0 podman[92463]: 2025-12-15 10:38:19.506415552 +0000 UTC m=+0.138774971 container attach b9454360481cd19337c683239b32d7ba6a5afb1971c9a5a5ace16bf1772adf33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_feynman, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Dec 15 10:38:19 compute-0 systemd[1]: libpod-b9454360481cd19337c683239b32d7ba6a5afb1971c9a5a5ace16bf1772adf33.scope: Deactivated successfully.
Dec 15 10:38:19 compute-0 podman[92463]: 2025-12-15 10:38:19.507118145 +0000 UTC m=+0.139477584 container died b9454360481cd19337c683239b32d7ba6a5afb1971c9a5a5ace16bf1772adf33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:38:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-90d505b7644667cd34dd5c282e46576c458627b0823c44fb8d4f6e3422930365-merged.mount: Deactivated successfully.
Dec 15 10:38:19 compute-0 podman[92463]: 2025-12-15 10:38:19.550286038 +0000 UTC m=+0.182645457 container remove b9454360481cd19337c683239b32d7ba6a5afb1971c9a5a5ace16bf1772adf33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_feynman, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 15 10:38:19 compute-0 systemd[1]: libpod-conmon-b9454360481cd19337c683239b32d7ba6a5afb1971c9a5a5ace16bf1772adf33.scope: Deactivated successfully.
Dec 15 10:38:19 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Dec 15 10:38:19 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Dec 15 10:38:19 compute-0 podman[92503]: 2025-12-15 10:38:19.699247941 +0000 UTC m=+0.046325476 container create 871f3e2d9fe4e82577fc862808b78b9e2219143b4e6138975f590f645181646c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_chaplygin, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 15 10:38:19 compute-0 systemd[1]: Started libpod-conmon-871f3e2d9fe4e82577fc862808b78b9e2219143b4e6138975f590f645181646c.scope.
Dec 15 10:38:19 compute-0 podman[92503]: 2025-12-15 10:38:19.675685741 +0000 UTC m=+0.022763226 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:38:19 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:38:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/201e8629016d8bd24d7cc6daf7809e6b8795cdef43522ababe31c07dd69897cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/201e8629016d8bd24d7cc6daf7809e6b8795cdef43522ababe31c07dd69897cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/201e8629016d8bd24d7cc6daf7809e6b8795cdef43522ababe31c07dd69897cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/201e8629016d8bd24d7cc6daf7809e6b8795cdef43522ababe31c07dd69897cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:19 compute-0 podman[92503]: 2025-12-15 10:38:19.803938147 +0000 UTC m=+0.151015562 container init 871f3e2d9fe4e82577fc862808b78b9e2219143b4e6138975f590f645181646c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_chaplygin, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:38:19 compute-0 podman[92503]: 2025-12-15 10:38:19.820141987 +0000 UTC m=+0.167219392 container start 871f3e2d9fe4e82577fc862808b78b9e2219143b4e6138975f590f645181646c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_chaplygin, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:38:19 compute-0 podman[92503]: 2025-12-15 10:38:19.823599901 +0000 UTC m=+0.170677306 container attach 871f3e2d9fe4e82577fc862808b78b9e2219143b4e6138975f590f645181646c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:38:20 compute-0 ceph-mgr[74651]: [progress INFO root] Writing back 11 completed events
Dec 15 10:38:20 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]: {
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:     "0": [
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:         {
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:             "devices": [
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:                 "/dev/loop3"
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:             ],
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:             "lv_name": "ceph_lv0",
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:             "lv_size": "21470642176",
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MCSOGR-6V69-UVGq-8FpK-DBjb-LyDE-FLUPxJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=77365f67-614e-5a8d-b658-640395550c79,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7690eca0-4e87-4157-a045-1912448da925,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:             "lv_uuid": "MCSOGR-6V69-UVGq-8FpK-DBjb-LyDE-FLUPxJ",
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:             "name": "ceph_lv0",
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:             "tags": {
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:                 "ceph.block_uuid": "MCSOGR-6V69-UVGq-8FpK-DBjb-LyDE-FLUPxJ",
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:                 "ceph.cephx_lockbox_secret": "",
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:                 "ceph.cluster_fsid": "77365f67-614e-5a8d-b658-640395550c79",
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:                 "ceph.cluster_name": "ceph",
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:                 "ceph.crush_device_class": "",
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:                 "ceph.encrypted": "0",
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:                 "ceph.osd_fsid": "7690eca0-4e87-4157-a045-1912448da925",
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:                 "ceph.osd_id": "0",
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:                 "ceph.type": "block",
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:                 "ceph.vdo": "0",
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:                 "ceph.with_tpm": "0"
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:             },
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:             "type": "block",
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:             "vg_name": "ceph_vg0"
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:         }
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]:     ]
Dec 15 10:38:20 compute-0 practical_chaplygin[92520]: }
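The JSON block above is the "ceph-volume lvm list --format json" result that cephadm requested via sudo[92398]: a single logical volume, /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3, tagged as the block device of osd.0 in cluster 77365f67-614e-5a8d-b658-640395550c79. A minimal parsing sketch for that structure, assuming the same JSON has been saved to a file (the filename is illustrative):

    # Sketch: summarize "ceph-volume lvm list --format json" output of the shape shown
    # above (top-level keys are OSD ids, values are lists of LV records).
    # "lvm_list.json" is an illustrative filename, not something produced by this host.
    import json

    with open("lvm_list.json") as fh:
        lvm = json.load(fh)

    for osd_id, volumes in lvm.items():
        for vol in volumes:
            tags = vol.get("tags", {})
            print(
                f"osd.{osd_id}: {vol.get('lv_path')} on {','.join(vol.get('devices', []))} "
                f"(osd_fsid={tags.get('ceph.osd_fsid')}, encrypted={tags.get('ceph.encrypted')})"
            )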
Dec 15 10:38:20 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:20 compute-0 systemd[1]: libpod-871f3e2d9fe4e82577fc862808b78b9e2219143b4e6138975f590f645181646c.scope: Deactivated successfully.
Dec 15 10:38:20 compute-0 podman[92503]: 2025-12-15 10:38:20.139117594 +0000 UTC m=+0.486195029 container died 871f3e2d9fe4e82577fc862808b78b9e2219143b4e6138975f590f645181646c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_chaplygin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:38:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-201e8629016d8bd24d7cc6daf7809e6b8795cdef43522ababe31c07dd69897cb-merged.mount: Deactivated successfully.
Dec 15 10:38:20 compute-0 podman[92503]: 2025-12-15 10:38:20.176843959 +0000 UTC m=+0.523921354 container remove 871f3e2d9fe4e82577fc862808b78b9e2219143b4e6138975f590f645181646c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_chaplygin, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:38:20 compute-0 systemd[1]: libpod-conmon-871f3e2d9fe4e82577fc862808b78b9e2219143b4e6138975f590f645181646c.scope: Deactivated successfully.
Dec 15 10:38:20 compute-0 ceph-mon[74356]: 5.1e scrub starts
Dec 15 10:38:20 compute-0 ceph-mon[74356]: 5.1e scrub ok
Dec 15 10:38:20 compute-0 ceph-mon[74356]: 3.a scrub starts
Dec 15 10:38:20 compute-0 ceph-mon[74356]: 3.a scrub ok
Dec 15 10:38:20 compute-0 ceph-mon[74356]: pgmap v14: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
Dec 15 10:38:20 compute-0 ceph-mon[74356]: 4.a scrub starts
Dec 15 10:38:20 compute-0 ceph-mon[74356]: 4.a scrub ok
Dec 15 10:38:20 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:20 compute-0 sudo[92398]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:20 compute-0 sudo[92541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:38:20 compute-0 sudo[92541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:20 compute-0 sudo[92541]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:20 compute-0 sudo[92566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 -- raw list --format json
Dec 15 10:38:20 compute-0 sudo[92566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:20 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec 15 10:38:20 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec 15 10:38:20 compute-0 podman[92631]: 2025-12-15 10:38:20.799578275 +0000 UTC m=+0.052976604 container create 7834a3e432f7617bf89254c19f7ead8cf78cf6a2fde31a3fb0eefb9d2d78f0c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:38:20 compute-0 systemd[1]: Started libpod-conmon-7834a3e432f7617bf89254c19f7ead8cf78cf6a2fde31a3fb0eefb9d2d78f0c8.scope.
Dec 15 10:38:20 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:38:20 compute-0 podman[92631]: 2025-12-15 10:38:20.776721897 +0000 UTC m=+0.030120246 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:38:20 compute-0 podman[92631]: 2025-12-15 10:38:20.888182245 +0000 UTC m=+0.141580574 container init 7834a3e432f7617bf89254c19f7ead8cf78cf6a2fde31a3fb0eefb9d2d78f0c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:38:20 compute-0 podman[92631]: 2025-12-15 10:38:20.894302574 +0000 UTC m=+0.147700883 container start 7834a3e432f7617bf89254c19f7ead8cf78cf6a2fde31a3fb0eefb9d2d78f0c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_maxwell, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:38:20 compute-0 podman[92631]: 2025-12-15 10:38:20.898206852 +0000 UTC m=+0.151605211 container attach 7834a3e432f7617bf89254c19f7ead8cf78cf6a2fde31a3fb0eefb9d2d78f0c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_maxwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 15 10:38:20 compute-0 sleepy_maxwell[92648]: 167 167
Dec 15 10:38:20 compute-0 systemd[1]: libpod-7834a3e432f7617bf89254c19f7ead8cf78cf6a2fde31a3fb0eefb9d2d78f0c8.scope: Deactivated successfully.
Dec 15 10:38:20 compute-0 podman[92631]: 2025-12-15 10:38:20.900867279 +0000 UTC m=+0.154265608 container died 7834a3e432f7617bf89254c19f7ead8cf78cf6a2fde31a3fb0eefb9d2d78f0c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 15 10:38:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-00409850c75dc61c01130d3217d1443eb0a3f0a3d3d9e1d3aa93dd7eeaffbe95-merged.mount: Deactivated successfully.
Dec 15 10:38:20 compute-0 podman[92631]: 2025-12-15 10:38:20.946849644 +0000 UTC m=+0.200247953 container remove 7834a3e432f7617bf89254c19f7ead8cf78cf6a2fde31a3fb0eefb9d2d78f0c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_maxwell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:38:20 compute-0 systemd[1]: libpod-conmon-7834a3e432f7617bf89254c19f7ead8cf78cf6a2fde31a3fb0eefb9d2d78f0c8.scope: Deactivated successfully.
Dec 15 10:38:21 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v15: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Dec 15 10:38:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 15 10:38:21 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec 15 10:38:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:38:21 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:21 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Dec 15 10:38:21 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Dec 15 10:38:21 compute-0 podman[92673]: 2025-12-15 10:38:21.141377649 +0000 UTC m=+0.037682504 container create e1c9fddb40f6f412bb12a8027edef61687e147a7ea63966c9cb8a1090fe1e8d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:38:21 compute-0 systemd[1]: Started libpod-conmon-e1c9fddb40f6f412bb12a8027edef61687e147a7ea63966c9cb8a1090fe1e8d3.scope.
Dec 15 10:38:21 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:38:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c79379d85eb8aa0a75456707a4b7a6bf1e393bda4cccec8dd89b013019a26b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c79379d85eb8aa0a75456707a4b7a6bf1e393bda4cccec8dd89b013019a26b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c79379d85eb8aa0a75456707a4b7a6bf1e393bda4cccec8dd89b013019a26b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c79379d85eb8aa0a75456707a4b7a6bf1e393bda4cccec8dd89b013019a26b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:21 compute-0 podman[92673]: 2025-12-15 10:38:21.125930144 +0000 UTC m=+0.022235019 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:38:21 compute-0 ceph-mon[74356]: 3.19 scrub starts
Dec 15 10:38:21 compute-0 ceph-mon[74356]: 3.19 scrub ok
Dec 15 10:38:21 compute-0 ceph-mon[74356]: 3.e scrub starts
Dec 15 10:38:21 compute-0 ceph-mon[74356]: 3.e scrub ok
Dec 15 10:38:21 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec 15 10:38:21 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:21 compute-0 podman[92673]: 2025-12-15 10:38:21.235562271 +0000 UTC m=+0.131867176 container init e1c9fddb40f6f412bb12a8027edef61687e147a7ea63966c9cb8a1090fe1e8d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_einstein, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:38:21 compute-0 podman[92673]: 2025-12-15 10:38:21.246931463 +0000 UTC m=+0.143236328 container start e1c9fddb40f6f412bb12a8027edef61687e147a7ea63966c9cb8a1090fe1e8d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec 15 10:38:21 compute-0 podman[92673]: 2025-12-15 10:38:21.25052335 +0000 UTC m=+0.146828225 container attach e1c9fddb40f6f412bb12a8027edef61687e147a7ea63966c9cb8a1090fe1e8d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 15 10:38:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:38:21 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec 15 10:38:21 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec 15 10:38:21 compute-0 sudo[92739]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsljzsdhyzqftdpjseovzivfbcgiabpq ; /usr/bin/python3'
Dec 15 10:38:21 compute-0 sudo[92739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:38:21 compute-0 python3[92745]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:38:21 compute-0 podman[92775]: 2025-12-15 10:38:21.758330816 +0000 UTC m=+0.055009051 container create 2a69220ef5456d38d2b730eadff7859d815ed3b74f9b9535ac13c53afcfd213f (image=quay.io/ceph/ceph:v19, name=unruffled_brown, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Dec 15 10:38:21 compute-0 systemd[1]: Started libpod-conmon-2a69220ef5456d38d2b730eadff7859d815ed3b74f9b9535ac13c53afcfd213f.scope.
Dec 15 10:38:21 compute-0 podman[92775]: 2025-12-15 10:38:21.726410662 +0000 UTC m=+0.023088907 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:38:21 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:38:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13c9aa9a26b2ddf213961cfa56407e55e930e61bbc49af98d051c5187f777fb3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13c9aa9a26b2ddf213961cfa56407e55e930e61bbc49af98d051c5187f777fb3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:21 compute-0 lvm[92806]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 15 10:38:21 compute-0 lvm[92806]: VG ceph_vg0 finished
Dec 15 10:38:21 compute-0 podman[92775]: 2025-12-15 10:38:21.886238741 +0000 UTC m=+0.182916996 container init 2a69220ef5456d38d2b730eadff7859d815ed3b74f9b9535ac13c53afcfd213f (image=quay.io/ceph/ceph:v19, name=unruffled_brown, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 15 10:38:21 compute-0 podman[92775]: 2025-12-15 10:38:21.894265114 +0000 UTC m=+0.190943379 container start 2a69220ef5456d38d2b730eadff7859d815ed3b74f9b9535ac13c53afcfd213f (image=quay.io/ceph/ceph:v19, name=unruffled_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 15 10:38:21 compute-0 podman[92775]: 2025-12-15 10:38:21.901041376 +0000 UTC m=+0.197719621 container attach 2a69220ef5456d38d2b730eadff7859d815ed3b74f9b9535ac13c53afcfd213f (image=quay.io/ceph/ceph:v19, name=unruffled_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:38:21 compute-0 ecstatic_einstein[92687]: {}
Dec 15 10:38:21 compute-0 unruffled_brown[92801]: ERROR: invalid flag --daemon-type
Dec 15 10:38:21 compute-0 systemd[1]: libpod-2a69220ef5456d38d2b730eadff7859d815ed3b74f9b9535ac13c53afcfd213f.scope: Deactivated successfully.
Dec 15 10:38:21 compute-0 conmon[92801]: conmon 2a69220ef5456d38d2b7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a69220ef5456d38d2b730eadff7859d815ed3b74f9b9535ac13c53afcfd213f.scope/container/memory.events
Dec 15 10:38:21 compute-0 podman[92775]: 2025-12-15 10:38:21.945905254 +0000 UTC m=+0.242583479 container died 2a69220ef5456d38d2b730eadff7859d815ed3b74f9b9535ac13c53afcfd213f (image=quay.io/ceph/ceph:v19, name=unruffled_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:38:21 compute-0 systemd[1]: libpod-e1c9fddb40f6f412bb12a8027edef61687e147a7ea63966c9cb8a1090fe1e8d3.scope: Deactivated successfully.
Dec 15 10:38:21 compute-0 systemd[1]: libpod-e1c9fddb40f6f412bb12a8027edef61687e147a7ea63966c9cb8a1090fe1e8d3.scope: Consumed 1.040s CPU time.
Dec 15 10:38:21 compute-0 conmon[92687]: conmon e1c9fddb40f6f412bb12 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e1c9fddb40f6f412bb12a8027edef61687e147a7ea63966c9cb8a1090fe1e8d3.scope/container/memory.events
Dec 15 10:38:21 compute-0 podman[92673]: 2025-12-15 10:38:21.974837231 +0000 UTC m=+0.871142096 container died e1c9fddb40f6f412bb12a8027edef61687e147a7ea63966c9cb8a1090fe1e8d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_einstein, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 15 10:38:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-13c9aa9a26b2ddf213961cfa56407e55e930e61bbc49af98d051c5187f777fb3-merged.mount: Deactivated successfully.
Dec 15 10:38:22 compute-0 podman[92775]: 2025-12-15 10:38:22.004097048 +0000 UTC m=+0.300775273 container remove 2a69220ef5456d38d2b730eadff7859d815ed3b74f9b9535ac13c53afcfd213f (image=quay.io/ceph/ceph:v19, name=unruffled_brown, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 15 10:38:22 compute-0 systemd[1]: libpod-conmon-2a69220ef5456d38d2b730eadff7859d815ed3b74f9b9535ac13c53afcfd213f.scope: Deactivated successfully.
Dec 15 10:38:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-71c79379d85eb8aa0a75456707a4b7a6bf1e393bda4cccec8dd89b013019a26b-merged.mount: Deactivated successfully.
Dec 15 10:38:22 compute-0 sudo[92739]: pam_unix(sudo:session): session closed for user root
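The "ERROR: invalid flag --daemon-type" from unruffled_brown is expected: "orch ps" is an orchestrator subcommand of the ceph CLI (served by the mgr), not of radosgw-admin, so the --entrypoint radosgw-admin invocation in the ansible task above cannot accept it. A sketch of the presumably intended query, reusing the image and the /etc/ceph mount from the failed command but switching the entrypoint to ceph; treat it as an assumed correction, and note that some releases document the flag as --daemon_type:

    # Sketch: list RGW daemons via the orchestrator, using the "ceph" entrypoint instead
    # of "radosgw-admin" (which has no "orch ps" subcommand). Image, mount and flags
    # mirror the failed invocation above; this is an assumed correction, not what ran.
    import json
    import subprocess

    cmd = [
        "podman", "run", "--rm", "--net=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring",
        "orch", "ps", "--daemon-type", "rgw", "--format", "json",
    ]
    daemons = json.loads(
        subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    )
    for d in daemons:
        print(d.get("daemon_type"), d.get("daemon_id"), d.get("hostname"), d.get("status_desc"))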
Dec 15 10:38:22 compute-0 podman[92673]: 2025-12-15 10:38:22.032465686 +0000 UTC m=+0.928770541 container remove e1c9fddb40f6f412bb12a8027edef61687e147a7ea63966c9cb8a1090fe1e8d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_einstein, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 15 10:38:22 compute-0 systemd[1]: libpod-conmon-e1c9fddb40f6f412bb12a8027edef61687e147a7ea63966c9cb8a1090fe1e8d3.scope: Deactivated successfully.
Dec 15 10:38:22 compute-0 sudo[92566]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:22 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:38:22 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:22 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:38:22 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
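The two config-key writes above (mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0) are the cephadm mgr module persisting its refreshed host and device inventory into the monitor's config-key store once the ceph-volume calls have completed. A minimal sketch for reading one cached entry back; "ceph config-key get" is the standard CLI, the key name is copied from the audit line, and treating the stored value as JSON is an assumption about how cephadm serializes its cache:

    # Sketch: read cephadm's cached device inventory for compute-0 out of the
    # config-key store. Key name is taken from the audit log line above; treating
    # the value as JSON is an assumption about cephadm's serialization.
    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        check=True, capture_output=True, text=True,
    ).stdout
    try:
        print(json.dumps(json.loads(raw), indent=2))
    except ValueError:
        print(raw)  # fall back to raw text if the value is not JSON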
Dec 15 10:38:22 compute-0 ceph-mon[74356]: 4.1e scrub starts
Dec 15 10:38:22 compute-0 ceph-mon[74356]: 4.1e scrub ok
Dec 15 10:38:22 compute-0 ceph-mon[74356]: pgmap v15: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Dec 15 10:38:22 compute-0 ceph-mon[74356]: Deploying daemon osd.2 on compute-2
Dec 15 10:38:22 compute-0 ceph-mon[74356]: 5.7 scrub starts
Dec 15 10:38:22 compute-0 ceph-mon[74356]: 5.7 scrub ok
Dec 15 10:38:22 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:22 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:22 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Dec 15 10:38:22 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Dec 15 10:38:23 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v16: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:38:23 compute-0 ceph-mon[74356]: 2.15 scrub starts
Dec 15 10:38:23 compute-0 ceph-mon[74356]: 2.15 scrub ok
Dec 15 10:38:23 compute-0 ceph-mon[74356]: 5.2 deep-scrub starts
Dec 15 10:38:23 compute-0 ceph-mon[74356]: 5.2 deep-scrub ok
Dec 15 10:38:23 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec 15 10:38:23 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec 15 10:38:24 compute-0 ceph-mon[74356]: 2.10 scrub starts
Dec 15 10:38:24 compute-0 ceph-mon[74356]: 2.10 scrub ok
Dec 15 10:38:24 compute-0 ceph-mon[74356]: pgmap v16: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:38:24 compute-0 ceph-mon[74356]: 4.5 deep-scrub starts
Dec 15 10:38:24 compute-0 ceph-mon[74356]: 4.5 deep-scrub ok
Dec 15 10:38:24 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec 15 10:38:24 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec 15 10:38:25 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v17: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:38:25 compute-0 ceph-mon[74356]: 2.19 scrub starts
Dec 15 10:38:25 compute-0 ceph-mon[74356]: 2.19 scrub ok
Dec 15 10:38:25 compute-0 ceph-mon[74356]: 5.1 scrub starts
Dec 15 10:38:25 compute-0 ceph-mon[74356]: 5.1 scrub ok
Dec 15 10:38:25 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:38:25 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:25 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:38:25 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:25 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Dec 15 10:38:25 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Dec 15 10:38:26 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:38:26 compute-0 ceph-mon[74356]: 2.e scrub starts
Dec 15 10:38:26 compute-0 ceph-mon[74356]: 2.e scrub ok
Dec 15 10:38:26 compute-0 ceph-mon[74356]: pgmap v17: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:38:26 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:26 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:26 compute-0 ceph-mon[74356]: 3.d scrub starts
Dec 15 10:38:26 compute-0 ceph-mon[74356]: 3.d scrub ok
Dec 15 10:38:26 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 2.d scrub starts
Dec 15 10:38:26 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 2.d scrub ok
Dec 15 10:38:27 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v18: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:38:27 compute-0 ceph-mon[74356]: 2.13 scrub starts
Dec 15 10:38:27 compute-0 ceph-mon[74356]: 2.13 scrub ok
Dec 15 10:38:27 compute-0 ceph-mon[74356]: 4.e scrub starts
Dec 15 10:38:27 compute-0 ceph-mon[74356]: 4.e scrub ok
Dec 15 10:38:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:38:27 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:38:27 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:27 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev 38ad733d-d823-4bea-97f6-5b493b2c4a3d (Updating rgw.rgw deployment (+3 -> 3))
Dec 15 10:38:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.jevpck", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 15 10:38:27 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.jevpck", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 15 10:38:27 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.jevpck", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 15 10:38:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec 15 10:38:27 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 2.c scrub starts
Dec 15 10:38:27 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:38:27 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:27 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.jevpck on compute-2
Dec 15 10:38:27 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.jevpck on compute-2
Dec 15 10:38:27 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 2.c scrub ok
Dec 15 10:38:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Dec 15 10:38:27 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 15 10:38:28 compute-0 ceph-mon[74356]: 2.d scrub starts
Dec 15 10:38:28 compute-0 ceph-mon[74356]: 2.d scrub ok
Dec 15 10:38:28 compute-0 ceph-mon[74356]: pgmap v18: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:38:28 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:28 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:28 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.jevpck", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 15 10:38:28 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.jevpck", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 15 10:38:28 compute-0 ceph-mon[74356]: 2.c scrub starts
Dec 15 10:38:28 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:28 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:28 compute-0 ceph-mon[74356]: Deploying daemon rgw.rgw.compute-2.jevpck on compute-2
Dec 15 10:38:28 compute-0 ceph-mon[74356]: 2.c scrub ok
Dec 15 10:38:28 compute-0 ceph-mon[74356]: from='osd.2 [v2:192.168.122.102:6800/1989266060,v1:192.168.122.102:6801/1989266060]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 15 10:38:28 compute-0 ceph-mon[74356]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 15 10:38:28 compute-0 ceph-mon[74356]: 5.f scrub starts
Dec 15 10:38:28 compute-0 ceph-mon[74356]: 5.f scrub ok
Dec 15 10:38:28 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec 15 10:38:28 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Dec 15 10:38:28 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 15 10:38:28 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e44 e44: 3 total, 2 up, 3 in
Dec 15 10:38:28 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Dec 15 10:38:28 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 2 up, 3 in
Dec 15 10:38:28 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 15 10:38:28 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:28 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 15 10:38:28 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Dec 15 10:38:28 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 15 10:38:28 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e44 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
Dec 15 10:38:29 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v20: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:38:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:38:29 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:38:29 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 15 10:38:29 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.aiqqke", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 15 10:38:29 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.aiqqke", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 15 10:38:29 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.aiqqke", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 15 10:38:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec 15 10:38:29 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:38:29 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:29 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.aiqqke on compute-1
Dec 15 10:38:29 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.aiqqke on compute-1
Dec 15 10:38:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec 15 10:38:29 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Dec 15 10:38:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e45 e45: 3 total, 2 up, 3 in
Dec 15 10:38:29 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 2 up, 3 in
Dec 15 10:38:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 15 10:38:29 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:29 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[9.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [0] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=45 pruub=13.235692978s) [] r=-1 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 active pruub 122.462608337s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=45 pruub=13.235692978s) [] r=-1 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 122.462608337s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[4.19( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.517093658s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active pruub 124.744041443s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[6.1b( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=10.237725258s) [] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 active pruub 119.464668274s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[6.1b( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=10.237725258s) [] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 119.464668274s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[2.1b( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=15.164842606s) [] r=-1 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active pruub 124.391899109s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.517081261s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active pruub 124.744132996s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[2.1b( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=15.164842606s) [] r=-1 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.391899109s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.517081261s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.744132996s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[4.1d( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.516783714s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active pruub 124.743988037s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[3.8( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=45 pruub=13.237709999s) [] r=-1 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 active pruub 122.464950562s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[4.1d( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.516783714s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743988037s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[3.8( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=45 pruub=13.237709999s) [] r=-1 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 122.464950562s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=10.237888336s) [] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 active pruub 119.465293884s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[4.3( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.516679764s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active pruub 124.744125366s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[4.6( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.516557693s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active pruub 124.744010925s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[4.6( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.516557693s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.744010925s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=10.237888336s) [] r=-1 lpr=45 pi=[31,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 119.465293884s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[4.19( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.517093658s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.744041443s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[4.3( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.516679764s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.744125366s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.516363144s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active pruub 124.744010925s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[5.0( empty local-lis/les=29/30 n=0 ec=16/16 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.516193390s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active pruub 124.743858337s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.516363144s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.744010925s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[5.0( empty local-lis/les=29/30 n=0 ec=16/16 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.516193390s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743858337s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[3.0( empty local-lis/les=27/28 n=0 ec=14/14 lis/c=27/27 les/c/f=28/28/0 sis=45 pruub=13.236982346s) [] r=-1 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 active pruub 122.464904785s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[3.0( empty local-lis/les=27/28 n=0 ec=14/14 lis/c=27/27 les/c/f=28/28/0 sis=45 pruub=13.236982346s) [] r=-1 lpr=45 pi=[27,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 122.464904785s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[5.d( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.515646935s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active pruub 124.743820190s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[5.d( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.515646935s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743820190s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[2.d( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=15.162179947s) [] r=-1 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active pruub 124.390380859s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[2.a( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=15.161992073s) [] r=-1 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active pruub 124.390220642s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[2.a( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=15.161992073s) [] r=-1 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.390220642s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[2.c( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=15.162145615s) [] r=-1 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active pruub 124.390449524s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[7.a( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=15.160378456s) [] r=-1 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active pruub 124.388687134s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[5.b( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.515380859s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active pruub 124.743713379s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[2.c( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=15.162145615s) [] r=-1 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.390449524s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[7.a( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=15.160378456s) [] r=-1 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.388687134s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[5.b( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.515380859s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743713379s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[5.8( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.515268326s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active pruub 124.743728638s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[5.8( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.515268326s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743728638s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[7.14( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=15.160088539s) [] r=-1 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active pruub 124.388648987s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[7.14( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=15.160088539s) [] r=-1 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.388648987s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=15.159987450s) [] r=-1 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active pruub 124.388671875s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=15.159987450s) [] r=-1 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.388671875s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.514788628s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active pruub 124.743522644s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[2.10( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=15.160030365s) [] r=-1 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active pruub 124.388641357s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[2.15( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=15.159869194s) [] r=-1 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active pruub 124.388641357s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.514788628s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743522644s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[2.15( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=15.159869194s) [] r=-1 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.388641357s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[2.10( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=15.160030365s) [] r=-1 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.388641357s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[5.12( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.514822960s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active pruub 124.743659973s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[5.12( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.514822960s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743659973s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[5.13( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.514556885s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active pruub 124.743537903s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[5.13( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=15.514556885s) [] r=-1 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743537903s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[7.1d( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=15.153445244s) [] r=-1 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active pruub 124.382545471s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[7.1d( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=15.153445244s) [] r=-1 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.382545471s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 45 pg[2.d( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=15.162179947s) [] r=-1 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.390380859s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:29 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.18 deep-scrub starts
Dec 15 10:38:29 compute-0 ceph-mon[74356]: 7.1b scrub starts
Dec 15 10:38:29 compute-0 ceph-mon[74356]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 15 10:38:29 compute-0 ceph-mon[74356]: 7.1b scrub ok
Dec 15 10:38:29 compute-0 ceph-mon[74356]: osdmap e44: 3 total, 2 up, 3 in
Dec 15 10:38:29 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:29 compute-0 ceph-mon[74356]: from='osd.2 [v2:192.168.122.102:6800/1989266060,v1:192.168.122.102:6801/1989266060]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 15 10:38:29 compute-0 ceph-mon[74356]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 15 10:38:29 compute-0 ceph-mon[74356]: 4.1 scrub starts
Dec 15 10:38:29 compute-0 ceph-mon[74356]: 4.1 scrub ok
Dec 15 10:38:29 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:29 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:29 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:29 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.aiqqke", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 15 10:38:29 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.aiqqke", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 15 10:38:29 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:29 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:29 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1989266060; not ready for session (expect reconnect)
Dec 15 10:38:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 15 10:38:29 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:29 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.18 deep-scrub ok
Dec 15 10:38:29 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 15 10:38:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Dec 15 10:38:29 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.jevpck' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 15 10:38:30 compute-0 ceph-mgr[74651]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Dec 15 10:38:30 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec 15 10:38:30 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.jevpck' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 15 10:38:30 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e46 e46: 3 total, 2 up, 3 in
Dec 15 10:38:30 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 2 up, 3 in
Dec 15 10:38:30 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 15 10:38:30 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:30 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 15 10:38:30 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1989266060; not ready for session (expect reconnect)
Dec 15 10:38:30 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 15 10:38:30 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:30 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 15 10:38:30 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 46 pg[9.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [0] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:38:30 compute-0 ceph-mon[74356]: pgmap v20: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:38:30 compute-0 ceph-mon[74356]: Deploying daemon rgw.rgw.compute-1.aiqqke on compute-1
Dec 15 10:38:30 compute-0 ceph-mon[74356]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Dec 15 10:38:30 compute-0 ceph-mon[74356]: osdmap e45: 3 total, 2 up, 3 in
Dec 15 10:38:30 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:30 compute-0 ceph-mon[74356]: from='client.? 192.168.122.102:0/823392755' entity='client.rgw.rgw.compute-2.jevpck' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 15 10:38:30 compute-0 ceph-mon[74356]: 7.18 deep-scrub starts
Dec 15 10:38:30 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:30 compute-0 ceph-mon[74356]: 7.18 deep-scrub ok
Dec 15 10:38:30 compute-0 ceph-mon[74356]: from='client.? ' entity='client.rgw.rgw.compute-2.jevpck' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 15 10:38:30 compute-0 ceph-mon[74356]: 4.c scrub starts
Dec 15 10:38:30 compute-0 ceph-mon[74356]: 4.c scrub ok
Dec 15 10:38:30 compute-0 ceph-mon[74356]: from='client.? ' entity='client.rgw.rgw.compute-2.jevpck' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 15 10:38:30 compute-0 ceph-mon[74356]: osdmap e46: 3 total, 2 up, 3 in
Dec 15 10:38:30 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:30 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Dec 15 10:38:30 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Dec 15 10:38:31 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v23: 195 pgs: 1 unknown, 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:38:31 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:38:31 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:31 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:38:31 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:38:31 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:31 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 15 10:38:31 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:31 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ufugvl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 15 10:38:31 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ufugvl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 15 10:38:31 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ufugvl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 15 10:38:31 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec 15 10:38:31 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:31 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:38:31 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:31 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.ufugvl on compute-0
Dec 15 10:38:31 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.ufugvl on compute-0
Dec 15 10:38:31 compute-0 sudo[92854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:38:31 compute-0 sudo[92854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:31 compute-0 sudo[92854]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:31 compute-0 sudo[92879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:38:31 compute-0 sudo[92879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:31 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec 15 10:38:31 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e47 e47: 3 total, 2 up, 3 in
Dec 15 10:38:31 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 2 up, 3 in
Dec 15 10:38:31 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 15 10:38:31 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:31 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 15 10:38:31 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1989266060; not ready for session (expect reconnect)
Dec 15 10:38:31 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 15 10:38:31 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:31 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec 15 10:38:31 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aiqqke' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 15 10:38:31 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 15 10:38:31 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec 15 10:38:31 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec 15 10:38:31 compute-0 ceph-mon[74356]: purged_snaps scrub starts
Dec 15 10:38:31 compute-0 ceph-mon[74356]: purged_snaps scrub ok
Dec 15 10:38:31 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:31 compute-0 ceph-mon[74356]: 7.1e scrub starts
Dec 15 10:38:31 compute-0 ceph-mon[74356]: 7.1e scrub ok
Dec 15 10:38:31 compute-0 ceph-mon[74356]: 5.1c scrub starts
Dec 15 10:38:31 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:31 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:31 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:31 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ufugvl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 15 10:38:31 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ufugvl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 15 10:38:31 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:31 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:31 compute-0 ceph-mon[74356]: osdmap e47: 3 total, 2 up, 3 in
Dec 15 10:38:31 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:31 compute-0 ceph-mon[74356]: from='client.? 192.168.122.101:0/342753226' entity='client.rgw.rgw.compute-1.aiqqke' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 15 10:38:31 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:31 compute-0 ceph-mon[74356]: from='client.? ' entity='client.rgw.rgw.compute-1.aiqqke' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 15 10:38:31 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec 15 10:38:31 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.jevpck' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 15 10:38:31 compute-0 podman[92944]: 2025-12-15 10:38:31.838658762 +0000 UTC m=+0.043328161 container create 2381da65c6d65da48123c7fe84cc6588a5c09b4eb16f94e8548d93721cb9036c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_ritchie, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 15 10:38:31 compute-0 systemd[1]: Started libpod-conmon-2381da65c6d65da48123c7fe84cc6588a5c09b4eb16f94e8548d93721cb9036c.scope.
Dec 15 10:38:31 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:38:31 compute-0 podman[92944]: 2025-12-15 10:38:31.818649438 +0000 UTC m=+0.023318877 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:38:31 compute-0 podman[92944]: 2025-12-15 10:38:31.919128069 +0000 UTC m=+0.123797498 container init 2381da65c6d65da48123c7fe84cc6588a5c09b4eb16f94e8548d93721cb9036c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_ritchie, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 15 10:38:31 compute-0 podman[92944]: 2025-12-15 10:38:31.932424584 +0000 UTC m=+0.137094013 container start 2381da65c6d65da48123c7fe84cc6588a5c09b4eb16f94e8548d93721cb9036c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_ritchie, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:38:31 compute-0 podman[92944]: 2025-12-15 10:38:31.936350515 +0000 UTC m=+0.141019904 container attach 2381da65c6d65da48123c7fe84cc6588a5c09b4eb16f94e8548d93721cb9036c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_ritchie, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 15 10:38:31 compute-0 practical_ritchie[92960]: 167 167
Dec 15 10:38:31 compute-0 systemd[1]: libpod-2381da65c6d65da48123c7fe84cc6588a5c09b4eb16f94e8548d93721cb9036c.scope: Deactivated successfully.
Dec 15 10:38:31 compute-0 podman[92944]: 2025-12-15 10:38:31.938308567 +0000 UTC m=+0.142977946 container died 2381da65c6d65da48123c7fe84cc6588a5c09b4eb16f94e8548d93721cb9036c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 15 10:38:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb12369f77b9af38560666ef8e7bf5b361d3fc2c9cb030bddfb205e73bd7484c-merged.mount: Deactivated successfully.
Dec 15 10:38:31 compute-0 podman[92944]: 2025-12-15 10:38:31.986519068 +0000 UTC m=+0.191188457 container remove 2381da65c6d65da48123c7fe84cc6588a5c09b4eb16f94e8548d93721cb9036c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:38:31 compute-0 systemd[1]: libpod-conmon-2381da65c6d65da48123c7fe84cc6588a5c09b4eb16f94e8548d93721cb9036c.scope: Deactivated successfully.
Dec 15 10:38:32 compute-0 systemd[1]: Reloading.
Dec 15 10:38:32 compute-0 systemd-rc-local-generator[93020]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:38:32 compute-0 systemd-sysv-generator[93025]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:38:32 compute-0 sudo[93033]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foonfetjdkvzkbfyxninipknswddiggg ; /usr/bin/python3'
Dec 15 10:38:32 compute-0 sudo[93033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:38:32 compute-0 systemd[1]: Reloading.
Dec 15 10:38:32 compute-0 systemd-sysv-generator[93073]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:38:32 compute-0 systemd-rc-local-generator[93069]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:38:32 compute-0 python3[93038]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:38:32 compute-0 podman[93077]: 2025-12-15 10:38:32.440288176 +0000 UTC m=+0.045507918 container create 0a884083fcf251849100047e44ea19fa049f4b9b6ee6da932afea044b51d0297 (image=quay.io/ceph/ceph:v19, name=upbeat_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 15 10:38:32 compute-0 podman[93077]: 2025-12-15 10:38:32.41987012 +0000 UTC m=+0.025089872 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:38:32 compute-0 systemd[1]: Started libpod-conmon-0a884083fcf251849100047e44ea19fa049f4b9b6ee6da932afea044b51d0297.scope.
Dec 15 10:38:32 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.ufugvl for 77365f67-614e-5a8d-b658-640395550c79...
Dec 15 10:38:32 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:38:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d81727e847f4690798a5a1f9257ccc6d31572f8ea4a7806e133090ed989ce45/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d81727e847f4690798a5a1f9257ccc6d31572f8ea4a7806e133090ed989ce45/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:32 compute-0 podman[93077]: 2025-12-15 10:38:32.558032524 +0000 UTC m=+0.163252276 container init 0a884083fcf251849100047e44ea19fa049f4b9b6ee6da932afea044b51d0297 (image=quay.io/ceph/ceph:v19, name=upbeat_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 15 10:38:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec 15 10:38:32 compute-0 podman[93077]: 2025-12-15 10:38:32.569706618 +0000 UTC m=+0.174926350 container start 0a884083fcf251849100047e44ea19fa049f4b9b6ee6da932afea044b51d0297 (image=quay.io/ceph/ceph:v19, name=upbeat_elion, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 15 10:38:32 compute-0 podman[93077]: 2025-12-15 10:38:32.573541458 +0000 UTC m=+0.178761190 container attach 0a884083fcf251849100047e44ea19fa049f4b9b6ee6da932afea044b51d0297 (image=quay.io/ceph/ceph:v19, name=upbeat_elion, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 15 10:38:32 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1989266060; not ready for session (expect reconnect)
Dec 15 10:38:32 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aiqqke' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 15 10:38:32 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.jevpck' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 15 10:38:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e48 e48: 3 total, 2 up, 3 in
Dec 15 10:38:32 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 2 up, 3 in
Dec 15 10:38:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 15 10:38:32 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:32 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 15 10:38:32 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.2 deep-scrub starts
Dec 15 10:38:32 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.2 deep-scrub ok
Dec 15 10:38:32 compute-0 upbeat_elion[93094]: ERROR: invalid flag --daemon-type
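The failure above comes from the ansible task logged at 10:38:32, which handed the orchestrator query "orch ps --daemon-type rgw --format json" to the container's radosgw-admin entrypoint; radosgw-admin has no such subcommand or flag. A minimal sketch of the intended query, assuming the same image and the admin keyring mounted under /etc/ceph exactly as in that task:

    podman run --rm --net=host \
      --volume /etc/ceph:/etc/ceph:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      orch ps --daemon-type rgw --format json

"orch ps" is served by the cephadm mgr module, so it has to be issued through the ceph CLI rather than radosgw-admin.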
Dec 15 10:38:32 compute-0 systemd[1]: libpod-0a884083fcf251849100047e44ea19fa049f4b9b6ee6da932afea044b51d0297.scope: Deactivated successfully.
Dec 15 10:38:32 compute-0 podman[93077]: 2025-12-15 10:38:32.643528278 +0000 UTC m=+0.248748010 container died 0a884083fcf251849100047e44ea19fa049f4b9b6ee6da932afea044b51d0297 (image=quay.io/ceph/ceph:v19, name=upbeat_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:38:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d81727e847f4690798a5a1f9257ccc6d31572f8ea4a7806e133090ed989ce45-merged.mount: Deactivated successfully.
Dec 15 10:38:32 compute-0 podman[93077]: 2025-12-15 10:38:32.681065747 +0000 UTC m=+0.286285479 container remove 0a884083fcf251849100047e44ea19fa049f4b9b6ee6da932afea044b51d0297 (image=quay.io/ceph/ceph:v19, name=upbeat_elion, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 15 10:38:32 compute-0 systemd[1]: libpod-conmon-0a884083fcf251849100047e44ea19fa049f4b9b6ee6da932afea044b51d0297.scope: Deactivated successfully.
Dec 15 10:38:32 compute-0 ceph-mon[74356]: 5.1c scrub ok
Dec 15 10:38:32 compute-0 ceph-mon[74356]: pgmap v23: 195 pgs: 1 unknown, 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 15 10:38:32 compute-0 ceph-mon[74356]: Deploying daemon rgw.rgw.compute-0.ufugvl on compute-0
Dec 15 10:38:32 compute-0 ceph-mon[74356]: 7.6 scrub starts
Dec 15 10:38:32 compute-0 ceph-mon[74356]: 7.6 scrub ok
Dec 15 10:38:32 compute-0 ceph-mon[74356]: from='client.? 192.168.122.102:0/2646887214' entity='client.rgw.rgw.compute-2.jevpck' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 15 10:38:32 compute-0 ceph-mon[74356]: from='client.? ' entity='client.rgw.rgw.compute-2.jevpck' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 15 10:38:32 compute-0 ceph-mon[74356]: from='client.? ' entity='client.rgw.rgw.compute-1.aiqqke' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 15 10:38:32 compute-0 ceph-mon[74356]: from='client.? ' entity='client.rgw.rgw.compute-2.jevpck' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 15 10:38:32 compute-0 ceph-mon[74356]: osdmap e48: 3 total, 2 up, 3 in
Dec 15 10:38:32 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:32 compute-0 sudo[93033]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:32 compute-0 podman[93175]: 2025-12-15 10:38:32.761775522 +0000 UTC m=+0.036601181 container create c18ac004eedee410b30ee1bcefaae99fedc485e898f3b29d4d9cc96240e0e6c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-rgw-rgw-compute-0-ufugvl, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 15 10:38:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/494fbcf9607dadf6085b93dde3d6c64eb1d096b31b60b13c4e9f8bf7ef8f5240/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/494fbcf9607dadf6085b93dde3d6c64eb1d096b31b60b13c4e9f8bf7ef8f5240/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/494fbcf9607dadf6085b93dde3d6c64eb1d096b31b60b13c4e9f8bf7ef8f5240/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/494fbcf9607dadf6085b93dde3d6c64eb1d096b31b60b13c4e9f8bf7ef8f5240/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.ufugvl supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:32 compute-0 podman[93175]: 2025-12-15 10:38:32.813355059 +0000 UTC m=+0.088180738 container init c18ac004eedee410b30ee1bcefaae99fedc485e898f3b29d4d9cc96240e0e6c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-rgw-rgw-compute-0-ufugvl, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 15 10:38:32 compute-0 podman[93175]: 2025-12-15 10:38:32.821852564 +0000 UTC m=+0.096678223 container start c18ac004eedee410b30ee1bcefaae99fedc485e898f3b29d4d9cc96240e0e6c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-rgw-rgw-compute-0-ufugvl, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 15 10:38:32 compute-0 bash[93175]: c18ac004eedee410b30ee1bcefaae99fedc485e898f3b29d4d9cc96240e0e6c7
Dec 15 10:38:32 compute-0 podman[93175]: 2025-12-15 10:38:32.74598261 +0000 UTC m=+0.020808289 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:38:32 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.ufugvl for 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:38:32 compute-0 radosgw[93194]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec 15 10:38:32 compute-0 radosgw[93194]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Dec 15 10:38:32 compute-0 radosgw[93194]: framework: beast
Dec 15 10:38:32 compute-0 radosgw[93194]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec 15 10:38:32 compute-0 radosgw[93194]: init_numa not setting numa affinity
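The radosgw instance above is serving the beast frontend on 192.168.122.100:8082. As a quick sanity check (a sketch, not taken from this log), an unauthenticated request to that endpoint would normally return the anonymous S3 ListAllMyBuckets response:

    curl -s http://192.168.122.100:8082/
    # typically an XML ListAllMyBucketsResult with an anonymous Owner and no buckets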
Dec 15 10:38:32 compute-0 sudo[92879]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:38:32 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:38:32 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 15 10:38:32 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:33 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev 38ad733d-d823-4bea-97f6-5b493b2c4a3d (Updating rgw.rgw deployment (+3 -> 3))
Dec 15 10:38:33 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event 38ad733d-d823-4bea-97f6-5b493b2c4a3d (Updating rgw.rgw deployment (+3 -> 3)) in 6 seconds
Dec 15 10:38:33 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 15 10:38:33 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 15 10:38:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 15 10:38:33 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 15 10:38:33 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:33 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev 59ca70e0-0a88-44a8-a5bf-48b9a85ed1b3 (Updating mds.cephfs deployment (+3 -> 3))
Dec 15 10:38:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.mhljub", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 15 10:38:33 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.mhljub", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 15 10:38:33 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.mhljub", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 15 10:38:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:38:33 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:33 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.mhljub on compute-2
Dec 15 10:38:33 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.mhljub on compute-2
Dec 15 10:38:33 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v26: 196 pgs: 1 unknown, 195 active+clean; 450 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 1023 B/s rd, 1.5 KiB/s wr, 3 op/s
Dec 15 10:38:33 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec 15 10:38:33 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1989266060; not ready for session (expect reconnect)
Dec 15 10:38:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 15 10:38:33 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec 15 10:38:33 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 15 10:38:33 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec 15 10:38:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e49 e49: 3 total, 2 up, 3 in
Dec 15 10:38:33 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 2 up, 3 in
Dec 15 10:38:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 15 10:38:33 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:33 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 15 10:38:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec 15 10:38:33 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3539254722' entity='client.rgw.rgw.compute-0.ufugvl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 15 10:38:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec 15 10:38:33 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aiqqke' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 15 10:38:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec 15 10:38:33 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.jevpck' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 15 10:38:33 compute-0 ceph-mon[74356]: 5.1b scrub starts
Dec 15 10:38:33 compute-0 ceph-mon[74356]: 5.1b scrub ok
Dec 15 10:38:33 compute-0 ceph-mon[74356]: 7.2 deep-scrub starts
Dec 15 10:38:33 compute-0 ceph-mon[74356]: 7.2 deep-scrub ok
Dec 15 10:38:33 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:33 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:33 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:33 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:33 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:33 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.mhljub", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 15 10:38:33 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.mhljub", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 15 10:38:33 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:33 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:33 compute-0 ceph-mon[74356]: osdmap e49: 3 total, 2 up, 3 in
Dec 15 10:38:33 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:33 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/3539254722' entity='client.rgw.rgw.compute-0.ufugvl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 15 10:38:33 compute-0 ceph-mon[74356]: from='client.? 192.168.122.101:0/342753226' entity='client.rgw.rgw.compute-1.aiqqke' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 15 10:38:33 compute-0 ceph-mon[74356]: from='client.? ' entity='client.rgw.rgw.compute-1.aiqqke' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 15 10:38:33 compute-0 ceph-mon[74356]: from='client.? 192.168.122.102:0/2646887214' entity='client.rgw.rgw.compute-2.jevpck' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 15 10:38:33 compute-0 ceph-mon[74356]: from='client.? ' entity='client.rgw.rgw.compute-2.jevpck' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 15 10:38:33 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 49 pg[11.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [0] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:38:34 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec 15 10:38:34 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec 15 10:38:34 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1989266060; not ready for session (expect reconnect)
Dec 15 10:38:34 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 15 10:38:34 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:34 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 15 10:38:34 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec 15 10:38:34 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3539254722' entity='client.rgw.rgw.compute-0.ufugvl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 15 10:38:34 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aiqqke' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 15 10:38:34 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.jevpck' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 15 10:38:34 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e50 e50: 3 total, 2 up, 3 in
Dec 15 10:38:34 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 2 up, 3 in
Dec 15 10:38:34 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 15 10:38:34 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:34 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 15 10:38:34 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 50 pg[11.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [0] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:38:34 compute-0 ceph-mon[74356]: 4.1a deep-scrub starts
Dec 15 10:38:34 compute-0 ceph-mon[74356]: 4.1a deep-scrub ok
Dec 15 10:38:34 compute-0 ceph-mon[74356]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 15 10:38:34 compute-0 ceph-mon[74356]: Deploying daemon mds.cephfs.compute-2.mhljub on compute-2
Dec 15 10:38:34 compute-0 ceph-mon[74356]: pgmap v26: 196 pgs: 1 unknown, 195 active+clean; 450 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 1023 B/s rd, 1.5 KiB/s wr, 3 op/s
Dec 15 10:38:34 compute-0 ceph-mon[74356]: 7.3 scrub starts
Dec 15 10:38:34 compute-0 ceph-mon[74356]: 7.3 scrub ok
Dec 15 10:38:34 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:34 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/3539254722' entity='client.rgw.rgw.compute-0.ufugvl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 15 10:38:34 compute-0 ceph-mon[74356]: from='client.? ' entity='client.rgw.rgw.compute-1.aiqqke' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 15 10:38:34 compute-0 ceph-mon[74356]: from='client.? ' entity='client.rgw.rgw.compute-2.jevpck' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 15 10:38:34 compute-0 ceph-mon[74356]: osdmap e50: 3 total, 2 up, 3 in
Dec 15 10:38:34 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:35 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v29: 197 pgs: 2 unknown, 195 active+clean; 450 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 1023 B/s rd, 1.5 KiB/s wr, 3 op/s
Dec 15 10:38:35 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:38:35 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:38:35 compute-0 ceph-mgr[74651]: [progress INFO root] Writing back 12 completed events
Dec 15 10:38:35 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 15 10:38:35 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:38:35 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:38:35 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:38:35 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:38:35 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:35 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event b057ac9e-4f7e-4132-9886-2f7a7bbfd6a1 (Global Recovery Event) in 5 seconds
Dec 15 10:38:35 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1989266060; not ready for session (expect reconnect)
Dec 15 10:38:35 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 15 10:38:35 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec 15 10:38:35 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:35 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 15 10:38:35 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec 15 10:38:35 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec 15 10:38:35 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e51 e51: 3 total, 2 up, 3 in
Dec 15 10:38:35 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 2 up, 3 in
Dec 15 10:38:35 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 15 10:38:35 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:35 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 15 10:38:35 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec 15 10:38:35 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3539254722' entity='client.rgw.rgw.compute-0.ufugvl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 15 10:38:35 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec 15 10:38:35 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aiqqke' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 15 10:38:35 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec 15 10:38:35 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.jevpck' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 15 10:38:35 compute-0 ceph-mon[74356]: 4.1b scrub starts
Dec 15 10:38:35 compute-0 ceph-mon[74356]: 4.1b scrub ok
Dec 15 10:38:35 compute-0 ceph-mon[74356]: 7.4 scrub starts
Dec 15 10:38:35 compute-0 ceph-mon[74356]: 7.4 scrub ok
Dec 15 10:38:35 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:35 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:35 compute-0 ceph-mon[74356]: OSD bench result of 5267.592940 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
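The mclock warning above asks for osd.2's IOPS capacity to be measured with an external benchmark and then overridden. A hedged sketch of the override it refers to, with the value taken from a fio run rather than the rejected bench result:

    ceph config set osd.2 osd_mclock_max_capacity_iops_ssd <iops_from_fio>
    # use osd_mclock_max_capacity_iops_hdd instead if osd.2 is on rotational media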
Dec 15 10:38:35 compute-0 ceph-mon[74356]: osdmap e51: 3 total, 2 up, 3 in
Dec 15 10:38:35 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:35 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/3539254722' entity='client.rgw.rgw.compute-0.ufugvl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 15 10:38:35 compute-0 ceph-mon[74356]: from='client.? 192.168.122.101:0/342753226' entity='client.rgw.rgw.compute-1.aiqqke' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 15 10:38:35 compute-0 ceph-mon[74356]: from='client.? ' entity='client.rgw.rgw.compute-1.aiqqke' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 15 10:38:35 compute-0 ceph-mon[74356]: from='client.? 192.168.122.102:0/2646887214' entity='client.rgw.rgw.compute-2.jevpck' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 15 10:38:35 compute-0 ceph-mon[74356]: from='client.? ' entity='client.rgw.rgw.compute-2.jevpck' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 15 10:38:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:38:36 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:38:36 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 15 10:38:36 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.fathlc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 15 10:38:36 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.fathlc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 15 10:38:36 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.fathlc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 15 10:38:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:38:36 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:36 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.fathlc on compute-0
Dec 15 10:38:36 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.fathlc on compute-0
Dec 15 10:38:36 compute-0 sudo[93790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:38:36 compute-0 sudo[93790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:36 compute-0 sudo[93790]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:36 compute-0 sudo[93815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:38:36 compute-0 sudo[93815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e2 assigned standby [v2:192.168.122.102:6804/3276111900,v1:192.168.122.102:6805/3276111900] as mds.0
Dec 15 10:38:36 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.mhljub assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 15 10:38:36 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 15 10:38:36 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 15 10:38:36 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 15 10:38:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:38:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e3 new map
Dec 15 10:38:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2025-12-15T10:38:36.266603+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        3
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-15T10:38:06.056696+0000
                                           modified        2025-12-15T10:38:36.266590+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24175}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-2.mhljub{0:24175} state up:creating seq 1 addr [v2:192.168.122.102:6804/3276111900,v1:192.168.122.102:6805/3276111900] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Dec 15 10:38:36 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3276111900,v1:192.168.122.102:6805/3276111900] up:boot
Dec 15 10:38:36 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.mhljub=up:creating}
Dec 15 10:38:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.mhljub"} v 0)
Dec 15 10:38:36 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.mhljub"}]: dispatch
Dec 15 10:38:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e3 all = 0
Dec 15 10:38:36 compute-0 ceph-mgr[74651]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1989266060; not ready for session (expect reconnect)
Dec 15 10:38:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 15 10:38:36 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:36 compute-0 ceph-mgr[74651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 15 10:38:36 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec 15 10:38:36 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec 15 10:38:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec 15 10:38:36 compute-0 podman[93879]: 2025-12-15 10:38:36.653946865 +0000 UTC m=+0.049550035 container create b005e3b0f1b2f479ae006d0d7588354d76d8ffb17a9380e3ac55c0079d914c6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_liskov, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:38:36 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3539254722' entity='client.rgw.rgw.compute-0.ufugvl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 15 10:38:36 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aiqqke' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 15 10:38:36 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.jevpck' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 15 10:38:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec 15 10:38:36 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/1989266060,v1:192.168.122.102:6801/1989266060] boot
Dec 15 10:38:36 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec 15 10:38:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 15 10:38:36 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec 15 10:38:36 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3539254722' entity='client.rgw.rgw.compute-0.ufugvl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[3.1b( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=52 pruub=6.123422623s) [2] r=-1 lpr=52 pi=[27,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 122.462608337s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[4.1c( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.404934883s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.744132996s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[3.1b( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=52 pruub=6.123393059s) [2] r=-1 lpr=52 pi=[27,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 122.462608337s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[2.1b( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=52 pruub=8.052678108s) [2] r=-1 lpr=52 pi=[34,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.391899109s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[4.1c( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.404905319s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.744132996s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[2.1b( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=52 pruub=8.052644730s) [2] r=-1 lpr=52 pi=[34,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.391899109s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[6.1b( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=52 pruub=3.125334740s) [2] r=-1 lpr=52 pi=[31,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 119.464668274s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[6.1b( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=52 pruub=3.125308275s) [2] r=-1 lpr=52 pi=[31,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 119.464668274s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[4.1d( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.404536247s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743988037s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[4.1d( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.404520035s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743988037s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[4.3( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.404635429s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.744125366s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[3.8( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=52 pruub=6.125457287s) [2] r=-1 lpr=52 pi=[27,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 122.464950562s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[4.3( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.404623985s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.744125366s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[3.8( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=52 pruub=6.125444889s) [2] r=-1 lpr=52 pi=[27,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 122.464950562s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[6.1( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=52 pruub=3.125714540s) [2] r=-1 lpr=52 pi=[31,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 119.465293884s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[6.1( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=52 pruub=3.125705004s) [2] r=-1 lpr=52 pi=[31,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 119.465293884s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[4.6( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.404370308s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.744010925s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[4.6( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.404358864s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.744010925s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[4.2( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.404298782s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.744010925s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[4.2( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.404283524s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.744010925s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[5.0( empty local-lis/les=29/30 n=0 ec=16/16 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.404136658s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743858337s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[3.0( empty local-lis/les=27/28 n=0 ec=14/14 lis/c=27/27 les/c/f=28/28/0 sis=52 pruub=6.125110149s) [2] r=-1 lpr=52 pi=[27,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 122.464904785s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[5.0( empty local-lis/les=29/30 n=0 ec=16/16 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.404082298s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743858337s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[3.0( empty local-lis/les=27/28 n=0 ec=14/14 lis/c=27/27 les/c/f=28/28/0 sis=52 pruub=6.125101089s) [2] r=-1 lpr=52 pi=[27,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 122.464904785s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[4.19( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.404175758s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.744041443s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[2.a( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=52 pruub=8.050262451s) [2] r=-1 lpr=52 pi=[34,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.390220642s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[2.d( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=52 pruub=8.050423622s) [2] r=-1 lpr=52 pi=[34,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.390380859s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[5.d( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.403834343s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743820190s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[2.d( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=52 pruub=8.050412178s) [2] r=-1 lpr=52 pi=[34,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.390380859s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[5.d( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.403824806s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743820190s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[2.a( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=52 pruub=8.050248146s) [2] r=-1 lpr=52 pi=[34,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.390220642s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[5.b( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.403643608s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743713379s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[2.c( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=52 pruub=8.050374031s) [2] r=-1 lpr=52 pi=[34,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.390449524s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[5.b( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.403635979s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743713379s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[2.c( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=52 pruub=8.050364494s) [2] r=-1 lpr=52 pi=[34,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.390449524s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[7.a( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=34/34 les/c/f=35/35/0 sis=52 pruub=8.048559189s) [2] r=-1 lpr=52 pi=[34,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.388687134s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[5.8( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.403592110s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743728638s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[5.8( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.403583527s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743728638s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[7.a( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=34/34 les/c/f=35/35/0 sis=52 pruub=8.048544884s) [2] r=-1 lpr=52 pi=[34,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.388687134s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[7.14( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=34/34 les/c/f=35/35/0 sis=52 pruub=8.048439980s) [2] r=-1 lpr=52 pi=[34,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.388648987s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[7.14( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=34/34 les/c/f=35/35/0 sis=52 pruub=8.048431396s) [2] r=-1 lpr=52 pi=[34,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.388648987s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[2.10( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=52 pruub=8.048407555s) [2] r=-1 lpr=52 pi=[34,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.388641357s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[2.10( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=52 pruub=8.048397064s) [2] r=-1 lpr=52 pi=[34,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.388641357s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[4.14( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.403204918s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743522644s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[2.13( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=52 pruub=8.048357010s) [2] r=-1 lpr=52 pi=[34,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.388671875s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[4.14( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.403196335s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743522644s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[2.15( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=52 pruub=8.048279762s) [2] r=-1 lpr=52 pi=[34,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.388641357s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[2.13( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=52 pruub=8.048341751s) [2] r=-1 lpr=52 pi=[34,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.388671875s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[2.15( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=34/34 les/c/f=35/35/0 sis=52 pruub=8.048269272s) [2] r=-1 lpr=52 pi=[34,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.388641357s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[5.12( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.403246880s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743659973s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[5.12( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.403237343s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743659973s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[5.13( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.403084755s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743537903s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[5.13( empty local-lis/les=29/30 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.403075218s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.743537903s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[7.1d( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=34/34 les/c/f=35/35/0 sis=52 pruub=8.041940689s) [2] r=-1 lpr=52 pi=[34,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.382545471s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[7.1d( empty local-lis/les=34/35 n=0 ec=32/18 lis/c=34/34 les/c/f=35/35/0 sis=52 pruub=8.041869164s) [2] r=-1 lpr=52 pi=[34,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.382545471s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 52 pg[4.19( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=52 pruub=8.403380394s) [2] r=-1 lpr=52 pi=[29,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 124.744041443s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:38:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec 15 10:38:36 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.jevpck' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 15 10:38:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec 15 10:38:36 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aiqqke' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 15 10:38:36 compute-0 systemd[1]: Started libpod-conmon-b005e3b0f1b2f479ae006d0d7588354d76d8ffb17a9380e3ac55c0079d914c6b.scope.
Dec 15 10:38:36 compute-0 podman[93879]: 2025-12-15 10:38:36.631053661 +0000 UTC m=+0.026656931 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:38:36 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:38:36 compute-0 podman[93879]: 2025-12-15 10:38:36.741465611 +0000 UTC m=+0.137068811 container init b005e3b0f1b2f479ae006d0d7588354d76d8ffb17a9380e3ac55c0079d914c6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:38:36 compute-0 podman[93879]: 2025-12-15 10:38:36.747467649 +0000 UTC m=+0.143070829 container start b005e3b0f1b2f479ae006d0d7588354d76d8ffb17a9380e3ac55c0079d914c6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:38:36 compute-0 podman[93879]: 2025-12-15 10:38:36.750807413 +0000 UTC m=+0.146410613 container attach b005e3b0f1b2f479ae006d0d7588354d76d8ffb17a9380e3ac55c0079d914c6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 15 10:38:36 compute-0 relaxed_liskov[93896]: 167 167
Dec 15 10:38:36 compute-0 systemd[1]: libpod-b005e3b0f1b2f479ae006d0d7588354d76d8ffb17a9380e3ac55c0079d914c6b.scope: Deactivated successfully.
Dec 15 10:38:36 compute-0 podman[93879]: 2025-12-15 10:38:36.755000413 +0000 UTC m=+0.150603653 container died b005e3b0f1b2f479ae006d0d7588354d76d8ffb17a9380e3ac55c0079d914c6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_liskov, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 15 10:38:36 compute-0 ceph-mon[74356]: 5.18 scrub starts
Dec 15 10:38:36 compute-0 ceph-mon[74356]: 5.18 scrub ok
Dec 15 10:38:36 compute-0 ceph-mon[74356]: pgmap v29: 197 pgs: 2 unknown, 195 active+clean; 450 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 1023 B/s rd, 1.5 KiB/s wr, 3 op/s
Dec 15 10:38:36 compute-0 ceph-mon[74356]: 7.e scrub starts
Dec 15 10:38:36 compute-0 ceph-mon[74356]: 7.e scrub ok
Dec 15 10:38:36 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:36 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:36 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:36 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.fathlc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 15 10:38:36 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.fathlc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 15 10:38:36 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:36 compute-0 ceph-mon[74356]: daemon mds.cephfs.compute-2.mhljub assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 15 10:38:36 compute-0 ceph-mon[74356]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 15 10:38:36 compute-0 ceph-mon[74356]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 15 10:38:36 compute-0 ceph-mon[74356]: Cluster is now healthy
Dec 15 10:38:36 compute-0 ceph-mon[74356]: mds.? [v2:192.168.122.102:6804/3276111900,v1:192.168.122.102:6805/3276111900] up:boot
Dec 15 10:38:36 compute-0 ceph-mon[74356]: fsmap cephfs:1 {0=cephfs.compute-2.mhljub=up:creating}
Dec 15 10:38:36 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.mhljub"}]: dispatch
Dec 15 10:38:36 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:36 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/3539254722' entity='client.rgw.rgw.compute-0.ufugvl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 15 10:38:36 compute-0 ceph-mon[74356]: from='client.? ' entity='client.rgw.rgw.compute-1.aiqqke' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 15 10:38:36 compute-0 ceph-mon[74356]: from='client.? ' entity='client.rgw.rgw.compute-2.jevpck' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 15 10:38:36 compute-0 ceph-mon[74356]: osd.2 [v2:192.168.122.102:6800/1989266060,v1:192.168.122.102:6801/1989266060] boot
Dec 15 10:38:36 compute-0 ceph-mon[74356]: osdmap e52: 3 total, 3 up, 3 in
Dec 15 10:38:36 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:38:36 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/3539254722' entity='client.rgw.rgw.compute-0.ufugvl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 15 10:38:36 compute-0 ceph-mon[74356]: from='client.? 192.168.122.102:0/2646887214' entity='client.rgw.rgw.compute-2.jevpck' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 15 10:38:36 compute-0 ceph-mon[74356]: from='client.? 192.168.122.101:0/342753226' entity='client.rgw.rgw.compute-1.aiqqke' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 15 10:38:36 compute-0 ceph-mon[74356]: from='client.? ' entity='client.rgw.rgw.compute-2.jevpck' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 15 10:38:36 compute-0 ceph-mon[74356]: from='client.? ' entity='client.rgw.rgw.compute-1.aiqqke' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 15 10:38:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb583c85cbf1b3e034a4b9d268fbda91f2db0a401678af94a2894b687200c804-merged.mount: Deactivated successfully.
Dec 15 10:38:36 compute-0 podman[93879]: 2025-12-15 10:38:36.795602418 +0000 UTC m=+0.191205598 container remove b005e3b0f1b2f479ae006d0d7588354d76d8ffb17a9380e3ac55c0079d914c6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_liskov, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:38:36 compute-0 systemd[1]: libpod-conmon-b005e3b0f1b2f479ae006d0d7588354d76d8ffb17a9380e3ac55c0079d914c6b.scope: Deactivated successfully.
Dec 15 10:38:36 compute-0 systemd[1]: Reloading.
Dec 15 10:38:36 compute-0 systemd-sysv-generator[93942]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:38:36 compute-0 systemd-rc-local-generator[93939]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:38:37 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v32: 198 pgs: 1 creating+peering, 197 active+clean; 450 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 7.7 KiB/s rd, 511 B/s wr, 9 op/s
Dec 15 10:38:37 compute-0 systemd[1]: Reloading.
Dec 15 10:38:37 compute-0 systemd-rc-local-generator[93982]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:38:37 compute-0 systemd-sysv-generator[93986]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:38:37 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.fathlc for 77365f67-614e-5a8d-b658-640395550c79...
Dec 15 10:38:37 compute-0 podman[94037]: 2025-12-15 10:38:37.590352159 +0000 UTC m=+0.035782016 container create 085c5aa5055bc815209233cd26442b3d66e1b5f7ae99b5916851f7d79fef7835 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mds-cephfs-compute-0-fathlc, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Dec 15 10:38:37 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec 15 10:38:37 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec 15 10:38:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b84dafd2eadb712ae42b3bb55d12d9c49fe75085bb8b628becf08e92ae9ac1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b84dafd2eadb712ae42b3bb55d12d9c49fe75085bb8b628becf08e92ae9ac1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b84dafd2eadb712ae42b3bb55d12d9c49fe75085bb8b628becf08e92ae9ac1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b84dafd2eadb712ae42b3bb55d12d9c49fe75085bb8b628becf08e92ae9ac1d/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.fathlc supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:37 compute-0 podman[94037]: 2025-12-15 10:38:37.651134973 +0000 UTC m=+0.096564840 container init 085c5aa5055bc815209233cd26442b3d66e1b5f7ae99b5916851f7d79fef7835 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mds-cephfs-compute-0-fathlc, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 15 10:38:37 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec 15 10:38:37 compute-0 podman[94037]: 2025-12-15 10:38:37.657127239 +0000 UTC m=+0.102557096 container start 085c5aa5055bc815209233cd26442b3d66e1b5f7ae99b5916851f7d79fef7835 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mds-cephfs-compute-0-fathlc, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:38:37 compute-0 bash[94037]: 085c5aa5055bc815209233cd26442b3d66e1b5f7ae99b5916851f7d79fef7835
Dec 15 10:38:37 compute-0 podman[94037]: 2025-12-15 10:38:37.575048062 +0000 UTC m=+0.020477939 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:38:37 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3539254722' entity='client.rgw.rgw.compute-0.ufugvl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 15 10:38:37 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.jevpck' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 15 10:38:37 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aiqqke' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 15 10:38:37 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec 15 10:38:37 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.fathlc for 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:38:37 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec 15 10:38:37 compute-0 ceph-mds[94057]: set uid:gid to 167:167 (ceph:ceph)
Dec 15 10:38:37 compute-0 ceph-mds[94057]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Dec 15 10:38:37 compute-0 ceph-mds[94057]: main not setting numa affinity
Dec 15 10:38:37 compute-0 ceph-mds[94057]: pidfile_write: ignore empty --pid-file
Dec 15 10:38:37 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mds-cephfs-compute-0-fathlc[94053]: starting mds.cephfs.compute-0.fathlc at 
Dec 15 10:38:37 compute-0 ceph-mds[94057]: mds.cephfs.compute-0.fathlc Updating MDS map to version 3 from mon.0
Dec 15 10:38:37 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.mhljub is now active in filesystem cephfs as rank 0
Dec 15 10:38:37 compute-0 sudo[93815]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:37 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:38:37 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:37 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:38:37 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:37 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 15 10:38:37 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:37 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.mmswte", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 15 10:38:37 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.mmswte", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 15 10:38:37 compute-0 ceph-mon[74356]: 4.18 deep-scrub starts
Dec 15 10:38:37 compute-0 ceph-mon[74356]: 4.18 deep-scrub ok
Dec 15 10:38:37 compute-0 ceph-mon[74356]: Deploying daemon mds.cephfs.compute-0.fathlc on compute-0
Dec 15 10:38:37 compute-0 ceph-mon[74356]: 7.f scrub starts
Dec 15 10:38:37 compute-0 ceph-mon[74356]: 7.f scrub ok
Dec 15 10:38:37 compute-0 ceph-mon[74356]: 6.15 scrub starts
Dec 15 10:38:37 compute-0 ceph-mon[74356]: 6.15 scrub ok
Dec 15 10:38:37 compute-0 ceph-mon[74356]: from='client.? 192.168.122.100:0/3539254722' entity='client.rgw.rgw.compute-0.ufugvl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 15 10:38:37 compute-0 ceph-mon[74356]: from='client.? ' entity='client.rgw.rgw.compute-2.jevpck' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 15 10:38:37 compute-0 ceph-mon[74356]: from='client.? ' entity='client.rgw.rgw.compute-1.aiqqke' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 15 10:38:37 compute-0 ceph-mon[74356]: osdmap e53: 3 total, 3 up, 3 in
Dec 15 10:38:37 compute-0 ceph-mon[74356]: daemon mds.cephfs.compute-2.mhljub is now active in filesystem cephfs as rank 0
Dec 15 10:38:37 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:37 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:37 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.mmswte", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 15 10:38:37 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:38:37 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:37 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.mmswte on compute-1
Dec 15 10:38:37 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.mmswte on compute-1
Dec 15 10:38:37 compute-0 radosgw[93194]: v1 topic migration: starting v1 topic migration..
Dec 15 10:38:37 compute-0 radosgw[93194]: LDAP not started since no server URIs were provided in the configuration.
Dec 15 10:38:37 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-rgw-rgw-compute-0-ufugvl[93190]: 2025-12-15T10:38:37.910+0000 7f1bce10f980 -1 LDAP not started since no server URIs were provided in the configuration.
Dec 15 10:38:37 compute-0 radosgw[93194]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Dec 15 10:38:37 compute-0 radosgw[93194]: v1 topic migration: finished v1 topic migration
Dec 15 10:38:37 compute-0 radosgw[93194]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Dec 15 10:38:37 compute-0 radosgw[93194]: framework: beast
Dec 15 10:38:37 compute-0 radosgw[93194]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec 15 10:38:37 compute-0 radosgw[93194]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec 15 10:38:37 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 15 10:38:37 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 15 10:38:37 compute-0 radosgw[93194]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Dec 15 10:38:37 compute-0 radosgw[93194]: starting handler: beast
Dec 15 10:38:37 compute-0 radosgw[93194]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Dec 15 10:38:37 compute-0 radosgw[93194]: set uid:gid to 167:167 (ceph:ceph)
Dec 15 10:38:37 compute-0 radosgw[93194]: mgrc service_daemon_register rgw.14421 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.ufugvl,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025,kernel_version=5.14.0-648.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864300,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=21f1ce38-f06e-4b6a-a658-42a5e73aa37d,zone_name=default,zonegroup_id=acfbbd99-4f79-4f36-9c96-89b7739f8e4b,zonegroup_name=default}
Dec 15 10:38:37 compute-0 radosgw[93194]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Dec 15 10:38:38 compute-0 radosgw[93194]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Dec 15 10:38:38 compute-0 radosgw[93194]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Dec 15 10:38:38 compute-0 radosgw[93194]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Dec 15 10:38:38 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec 15 10:38:38 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec 15 10:38:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec 15 10:38:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e4 new map
Dec 15 10:38:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2025-12-15T10:38:38.665973+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-15T10:38:06.056696+0000
                                           modified        2025-12-15T10:38:38.665969+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24175}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24175 members: 24175
                                           [mds.cephfs.compute-2.mhljub{0:24175} state up:active seq 2 addr [v2:192.168.122.102:6804/3276111900,v1:192.168.122.102:6805/3276111900] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.fathlc{-1:14427} state up:standby seq 1 addr [v2:192.168.122.100:6806/1912978174,v1:192.168.122.100:6807/1912978174] compat {c=[1],r=[1],i=[1fff]}]
Dec 15 10:38:38 compute-0 ceph-mds[94057]: mds.cephfs.compute-0.fathlc Updating MDS map to version 4 from mon.0
Dec 15 10:38:38 compute-0 ceph-mds[94057]: mds.cephfs.compute-0.fathlc Monitors have assigned me to become a standby
Dec 15 10:38:38 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/1912978174,v1:192.168.122.100:6807/1912978174] up:boot
Dec 15 10:38:38 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3276111900,v1:192.168.122.102:6805/3276111900] up:active
Dec 15 10:38:38 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.mhljub=up:active} 1 up:standby
Dec 15 10:38:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.fathlc"} v 0)
Dec 15 10:38:38 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.fathlc"}]: dispatch
Dec 15 10:38:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e4 all = 0
Dec 15 10:38:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e5 new map
Dec 15 10:38:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2025-12-15T10:38:38.688636+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-15T10:38:06.056696+0000
                                           modified        2025-12-15T10:38:38.665969+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24175}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24175 members: 24175
                                           [mds.cephfs.compute-2.mhljub{0:24175} state up:active seq 2 addr [v2:192.168.122.102:6804/3276111900,v1:192.168.122.102:6805/3276111900] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.fathlc{-1:14427} state up:standby seq 1 addr [v2:192.168.122.100:6806/1912978174,v1:192.168.122.100:6807/1912978174] compat {c=[1],r=[1],i=[1fff]}]
Dec 15 10:38:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec 15 10:38:38 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec 15 10:38:38 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.mhljub=up:active} 1 up:standby
Dec 15 10:38:38 compute-0 ceph-mon[74356]: pgmap v32: 198 pgs: 1 creating+peering, 197 active+clean; 450 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 7.7 KiB/s rd, 511 B/s wr, 9 op/s
Dec 15 10:38:38 compute-0 ceph-mon[74356]: 7.8 scrub starts
Dec 15 10:38:38 compute-0 ceph-mon[74356]: 7.8 scrub ok
Dec 15 10:38:38 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:38 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.mmswte", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 15 10:38:38 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.mmswte", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 15 10:38:38 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:38 compute-0 ceph-mon[74356]: Deploying daemon mds.cephfs.compute-1.mmswte on compute-1
Dec 15 10:38:38 compute-0 ceph-mon[74356]: 6.1e scrub starts
Dec 15 10:38:38 compute-0 ceph-mon[74356]: 6.1e scrub ok
Dec 15 10:38:38 compute-0 ceph-mon[74356]: mds.? [v2:192.168.122.100:6806/1912978174,v1:192.168.122.100:6807/1912978174] up:boot
Dec 15 10:38:38 compute-0 ceph-mon[74356]: mds.? [v2:192.168.122.102:6804/3276111900,v1:192.168.122.102:6805/3276111900] up:active
Dec 15 10:38:38 compute-0 ceph-mon[74356]: fsmap cephfs:1 {0=cephfs.compute-2.mhljub=up:active} 1 up:standby
Dec 15 10:38:38 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.fathlc"}]: dispatch
Dec 15 10:38:38 compute-0 ceph-mon[74356]: osdmap e54: 3 total, 3 up, 3 in
Dec 15 10:38:38 compute-0 ceph-mon[74356]: fsmap cephfs:1 {0=cephfs.compute-2.mhljub=up:active} 1 up:standby
Dec 15 10:38:39 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v35: 198 pgs: 57 peering, 1 creating+peering, 140 active+clean; 450 KiB data, 481 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 511 B/s wr, 9 op/s
Dec 15 10:38:39 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec 15 10:38:39 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec 15 10:38:39 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:38:40 compute-0 ceph-mon[74356]: 6.8 deep-scrub starts
Dec 15 10:38:40 compute-0 ceph-mon[74356]: 6.8 deep-scrub ok
Dec 15 10:38:40 compute-0 ceph-mon[74356]: 7.9 scrub starts
Dec 15 10:38:40 compute-0 ceph-mon[74356]: 7.9 scrub ok
Dec 15 10:38:40 compute-0 ceph-mon[74356]: 6.7 scrub starts
Dec 15 10:38:40 compute-0 ceph-mon[74356]: 5.e scrub starts
Dec 15 10:38:40 compute-0 ceph-mon[74356]: 5.e scrub ok
Dec 15 10:38:40 compute-0 ceph-mgr[74651]: [progress INFO root] Writing back 13 completed events
Dec 15 10:38:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 15 10:38:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e6 new map
Dec 15 10:38:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e6 print_map
                                           e6
                                           btime 2025-12-15T10:38:39.938332+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-15T10:38:06.056696+0000
                                           modified        2025-12-15T10:38:38.665969+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24175}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24175 members: 24175
                                           [mds.cephfs.compute-2.mhljub{0:24175} state up:active seq 2 addr [v2:192.168.122.102:6804/3276111900,v1:192.168.122.102:6805/3276111900] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.fathlc{-1:14427} state up:standby seq 1 addr [v2:192.168.122.100:6806/1912978174,v1:192.168.122.100:6807/1912978174] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.mmswte{-1:24170} state up:standby seq 1 addr [v2:192.168.122.101:6804/664164116,v1:192.168.122.101:6805/664164116] compat {c=[1],r=[1],i=[1fff]}]
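The fsmap printed above (epoch e6) shows filesystem 'cephfs' with one active MDS rank (cephfs.compute-2.mhljub) and two standby daemons. A hedged sketch of querying the same information interactively (standard ceph CLI; the filesystem name is the one from the map above):

    # Illustrative read-only queries of the FSMap.
    ceph fs status cephfs   # ranks, standbys and pool usage in tabular form
    ceph fs get cephfs      # the per-filesystem settings shown above (max_mds, session_timeout, ...)
    ceph fs dump            # the full FSMap, including standby daemons and compat sets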
Dec 15 10:38:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:40 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/664164116,v1:192.168.122.101:6805/664164116] up:boot
Dec 15 10:38:40 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.mhljub=up:active} 2 up:standby
Dec 15 10:38:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.mmswte"} v 0)
Dec 15 10:38:40 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.mmswte"}]: dispatch
Dec 15 10:38:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e6 all = 0
Dec 15 10:38:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:38:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:40 compute-0 ceph-mgr[74651]: [progress WARNING root] Starting Global Recovery Event,58 pgs not in active + clean state
Dec 15 10:38:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 15 10:38:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:40 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev 59ca70e0-0a88-44a8-a5bf-48b9a85ed1b3 (Updating mds.cephfs deployment (+3 -> 3))
Dec 15 10:38:40 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event 59ca70e0-0a88-44a8-a5bf-48b9a85ed1b3 (Updating mds.cephfs deployment (+3 -> 3)) in 7 seconds
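The mgr progress module is what turns the "Updating mds.cephfs deployment (+3 -> 3)" work into the completed event logged here. A hedged sketch of inspecting those events (standard progress-module commands; the event id is the one from the log):

    # Illustrative progress-module queries.
    ceph progress         # human-readable list of in-flight and recently completed events
    ceph progress json    # the same data as JSON, including event 59ca70e0-0a88-44a8-a5bf-48b9a85ed1b3
    ceph progress clear   # drop completed events if the list gets noisy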
Dec 15 10:38:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Dec 15 10:38:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 15 10:38:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:40 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev f98947ca-ddb4-457f-af3e-abf764889564 (Updating nfs.cephfs deployment (+3 -> 3))
Dec 15 10:38:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 15 10:38:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:40 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.segvuq
Dec 15 10:38:40 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.segvuq
Dec 15 10:38:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.segvuq", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec 15 10:38:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.segvuq", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 15 10:38:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.segvuq", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 15 10:38:40 compute-0 ceph-mgr[74651]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec 15 10:38:40 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec 15 10:38:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec 15 10:38:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 15 10:38:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 15 10:38:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:38:40 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:40 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Dec 15 10:38:40 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Dec 15 10:38:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec 15 10:38:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 15 10:38:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 15 10:38:40 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec 15 10:38:40 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec 15 10:38:40 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.segvuq-rgw
Dec 15 10:38:40 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.segvuq-rgw
Dec 15 10:38:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.segvuq-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 15 10:38:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.segvuq-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 15 10:38:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.segvuq-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 15 10:38:40 compute-0 ceph-mgr[74651]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.segvuq's ganesha conf is defaulting to empty
Dec 15 10:38:40 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.segvuq's ganesha conf is defaulting to empty
Dec 15 10:38:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:38:40 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:40 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.segvuq on compute-1
Dec 15 10:38:40 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.segvuq on compute-1
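The sequence from "Creating key for client.nfs.cephfs.0.0.compute-1.segvuq" down to this line is cephadm rolling out the first ganesha daemon of the nfs.cephfs service: it mints cephx keys, checks the ganesha grace table and the conf-nfs.cephfs RADOS config object, then deploys the container on compute-1. A hedged sketch of the admin-side view of that service (the service and daemon names come from the log; the commands are standard orchestrator CLI, shown only as examples):

    # Illustrative follow-up checks after the deployment messages above.
    ceph orch ls nfs                                        # the nfs.cephfs service spec and its placement
    ceph orch ps --daemon_type nfs                          # ganesha daemons and which host each landed on
    ceph auth get client.nfs.cephfs.0.0.compute-1.segvuq    # the key minted for this daemon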
Dec 15 10:38:41 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v36: 198 pgs: 57 peering, 1 creating+peering, 140 active+clean; 450 KiB data, 481 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 380 B/s wr, 7 op/s
Dec 15 10:38:41 compute-0 ceph-mon[74356]: 6.7 scrub ok
Dec 15 10:38:41 compute-0 ceph-mon[74356]: pgmap v35: 198 pgs: 57 peering, 1 creating+peering, 140 active+clean; 450 KiB data, 481 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 511 B/s wr, 9 op/s
Dec 15 10:38:41 compute-0 ceph-mon[74356]: 7.b scrub starts
Dec 15 10:38:41 compute-0 ceph-mon[74356]: 7.b scrub ok
Dec 15 10:38:41 compute-0 ceph-mon[74356]: 6.a scrub starts
Dec 15 10:38:41 compute-0 ceph-mon[74356]: 6.a scrub ok
Dec 15 10:38:41 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:41 compute-0 ceph-mon[74356]: mds.? [v2:192.168.122.101:6804/664164116,v1:192.168.122.101:6805/664164116] up:boot
Dec 15 10:38:41 compute-0 ceph-mon[74356]: fsmap cephfs:1 {0=cephfs.compute-2.mhljub=up:active} 2 up:standby
Dec 15 10:38:41 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.mmswte"}]: dispatch
Dec 15 10:38:41 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:41 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:41 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:41 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:41 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:41 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:41 compute-0 ceph-mon[74356]: Creating key for client.nfs.cephfs.0.0.compute-1.segvuq
Dec 15 10:38:41 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.segvuq", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 15 10:38:41 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.segvuq", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 15 10:38:41 compute-0 ceph-mon[74356]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec 15 10:38:41 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 15 10:38:41 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 15 10:38:41 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:41 compute-0 ceph-mon[74356]: 5.1a deep-scrub starts
Dec 15 10:38:41 compute-0 ceph-mon[74356]: 5.1a deep-scrub ok
Dec 15 10:38:41 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 15 10:38:41 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 15 10:38:41 compute-0 ceph-mon[74356]: Rados config object exists: conf-nfs.cephfs
Dec 15 10:38:41 compute-0 ceph-mon[74356]: Creating key for client.nfs.cephfs.0.0.compute-1.segvuq-rgw
Dec 15 10:38:41 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.segvuq-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 15 10:38:41 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.segvuq-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 15 10:38:41 compute-0 ceph-mon[74356]: Bind address in nfs.cephfs.0.0.compute-1.segvuq's ganesha conf is defaulting to empty
Dec 15 10:38:41 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:41 compute-0 ceph-mon[74356]: 6.5 scrub starts
Dec 15 10:38:41 compute-0 ceph-mon[74356]: Deploying daemon nfs.cephfs.0.0.compute-1.segvuq on compute-1
Dec 15 10:38:41 compute-0 ceph-mon[74356]: 6.5 scrub ok
Dec 15 10:38:41 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:38:41 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Dec 15 10:38:41 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Dec 15 10:38:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e7 new map
Dec 15 10:38:42 compute-0 ceph-mds[94057]: mds.cephfs.compute-0.fathlc Updating MDS map to version 7 from mon.0
Dec 15 10:38:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e7 print_map
                                           e7
                                           btime 2025-12-15T10:38:42.041607+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-15T10:38:06.056696+0000
                                           modified        2025-12-15T10:38:41.726916+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24175}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24175 members: 24175
                                           [mds.cephfs.compute-2.mhljub{0:24175} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3276111900,v1:192.168.122.102:6805/3276111900] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.fathlc{-1:14427} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/1912978174,v1:192.168.122.100:6807/1912978174] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.mmswte{-1:24170} state up:standby seq 1 addr [v2:192.168.122.101:6804/664164116,v1:192.168.122.101:6805/664164116] compat {c=[1],r=[1],i=[1fff]}]
Dec 15 10:38:42 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/1912978174,v1:192.168.122.100:6807/1912978174] up:standby
Dec 15 10:38:42 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3276111900,v1:192.168.122.102:6805/3276111900] up:active
Dec 15 10:38:42 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.mhljub=up:active} 2 up:standby
Dec 15 10:38:42 compute-0 ceph-mon[74356]: 6.14 scrub starts
Dec 15 10:38:42 compute-0 ceph-mon[74356]: 6.14 scrub ok
Dec 15 10:38:42 compute-0 ceph-mon[74356]: pgmap v36: 198 pgs: 57 peering, 1 creating+peering, 140 active+clean; 450 KiB data, 481 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 380 B/s wr, 7 op/s
Dec 15 10:38:42 compute-0 ceph-mon[74356]: 7.14 deep-scrub starts
Dec 15 10:38:42 compute-0 ceph-mon[74356]: 7.14 deep-scrub ok
Dec 15 10:38:42 compute-0 ceph-mon[74356]: 6.2 scrub starts
Dec 15 10:38:42 compute-0 ceph-mon[74356]: 6.2 scrub ok
Dec 15 10:38:42 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.10 deep-scrub starts
Dec 15 10:38:42 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.10 deep-scrub ok
Dec 15 10:38:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:38:42 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:38:42 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 15 10:38:42 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:42 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.uezrcp
Dec 15 10:38:42 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.uezrcp
Dec 15 10:38:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.uezrcp", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec 15 10:38:42 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.uezrcp", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 15 10:38:42 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.uezrcp", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 15 10:38:42 compute-0 ceph-mgr[74651]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec 15 10:38:42 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec 15 10:38:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec 15 10:38:42 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 15 10:38:42 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 15 10:38:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:38:42 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:42 compute-0 sudo[94171]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abtnftmdctghwjuxnxbsqlspzmedmehd ; /usr/bin/python3'
Dec 15 10:38:42 compute-0 sudo[94171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:38:42 compute-0 python3[94174]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:38:43 compute-0 podman[94190]: 2025-12-15 10:38:43.013289064 +0000 UTC m=+0.046453489 container create 5706adfdec7a8d16cbc1d0b8f646892531be73490667ebb8fa6585bbfeb3680e (image=quay.io/ceph/ceph:v19, name=awesome_visvesvaraya, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True)
Dec 15 10:38:43 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 221 KiB/s rd, 6.9 KiB/s wr, 408 op/s
Dec 15 10:38:43 compute-0 systemd[1]: Started libpod-conmon-5706adfdec7a8d16cbc1d0b8f646892531be73490667ebb8fa6585bbfeb3680e.scope.
Dec 15 10:38:43 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/325946c495e05a6124a0bda8004b136f31d85301535994385772baeaa0c67f26/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/325946c495e05a6124a0bda8004b136f31d85301535994385772baeaa0c67f26/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:43 compute-0 podman[94190]: 2025-12-15 10:38:42.992489596 +0000 UTC m=+0.025654011 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:38:43 compute-0 podman[94190]: 2025-12-15 10:38:43.092505621 +0000 UTC m=+0.125670006 container init 5706adfdec7a8d16cbc1d0b8f646892531be73490667ebb8fa6585bbfeb3680e (image=quay.io/ceph/ceph:v19, name=awesome_visvesvaraya, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 15 10:38:43 compute-0 podman[94190]: 2025-12-15 10:38:43.09887147 +0000 UTC m=+0.132035825 container start 5706adfdec7a8d16cbc1d0b8f646892531be73490667ebb8fa6585bbfeb3680e (image=quay.io/ceph/ceph:v19, name=awesome_visvesvaraya, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 15 10:38:43 compute-0 podman[94190]: 2025-12-15 10:38:43.102988669 +0000 UTC m=+0.136153014 container attach 5706adfdec7a8d16cbc1d0b8f646892531be73490667ebb8fa6585bbfeb3680e (image=quay.io/ceph/ceph:v19, name=awesome_visvesvaraya, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 15 10:38:43 compute-0 awesome_visvesvaraya[94205]: ERROR: invalid flag --daemon-type
Dec 15 10:38:43 compute-0 systemd[1]: libpod-5706adfdec7a8d16cbc1d0b8f646892531be73490667ebb8fa6585bbfeb3680e.scope: Deactivated successfully.
Dec 15 10:38:43 compute-0 podman[94190]: 2025-12-15 10:38:43.156806475 +0000 UTC m=+0.189970840 container died 5706adfdec7a8d16cbc1d0b8f646892531be73490667ebb8fa6585bbfeb3680e (image=quay.io/ceph/ceph:v19, name=awesome_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:38:43 compute-0 ceph-mon[74356]: 6.16 scrub starts
Dec 15 10:38:43 compute-0 ceph-mon[74356]: 6.16 scrub ok
Dec 15 10:38:43 compute-0 ceph-mon[74356]: mds.? [v2:192.168.122.100:6806/1912978174,v1:192.168.122.100:6807/1912978174] up:standby
Dec 15 10:38:43 compute-0 ceph-mon[74356]: mds.? [v2:192.168.122.102:6804/3276111900,v1:192.168.122.102:6805/3276111900] up:active
Dec 15 10:38:43 compute-0 ceph-mon[74356]: fsmap cephfs:1 {0=cephfs.compute-2.mhljub=up:active} 2 up:standby
Dec 15 10:38:43 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:43 compute-0 ceph-mon[74356]: 7.1d scrub starts
Dec 15 10:38:43 compute-0 ceph-mon[74356]: 7.1d scrub ok
Dec 15 10:38:43 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:43 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:43 compute-0 ceph-mon[74356]: Creating key for client.nfs.cephfs.1.0.compute-2.uezrcp
Dec 15 10:38:43 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.uezrcp", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 15 10:38:43 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.uezrcp", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 15 10:38:43 compute-0 ceph-mon[74356]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec 15 10:38:43 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 15 10:38:43 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 15 10:38:43 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:43 compute-0 ceph-mon[74356]: 6.d scrub starts
Dec 15 10:38:43 compute-0 ceph-mon[74356]: 6.d scrub ok
Dec 15 10:38:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-325946c495e05a6124a0bda8004b136f31d85301535994385772baeaa0c67f26-merged.mount: Deactivated successfully.
Dec 15 10:38:43 compute-0 podman[94190]: 2025-12-15 10:38:43.193134407 +0000 UTC m=+0.226298752 container remove 5706adfdec7a8d16cbc1d0b8f646892531be73490667ebb8fa6585bbfeb3680e (image=quay.io/ceph/ceph:v19, name=awesome_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 15 10:38:43 compute-0 systemd[1]: libpod-conmon-5706adfdec7a8d16cbc1d0b8f646892531be73490667ebb8fa6585bbfeb3680e.scope: Deactivated successfully.
Dec 15 10:38:43 compute-0 sudo[94171]: pam_unix(sudo:session): session closed for user root
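The container run launched at 10:38:42 exits with "ERROR: invalid flag --daemon-type" because radosgw-admin has no orch subcommand and so rejects the flag; orch ps is a ceph CLI command that is forwarded to the mgr orchestrator. A hedged correction of the same invocation, keeping the image, conf and keyring mounts from the log, swapping the entrypoint to ceph and dropping --fsid (the cluster is identified by the mounted ceph.conf); this is an assumed fix, not something the log itself performs:

    # Assumed corrected command; the failing run differed only in using radosgw-admin and --fsid.
    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      orch ps --daemon_type rgw --format json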
Dec 15 10:38:43 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Dec 15 10:38:43 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Dec 15 10:38:44 compute-0 ceph-mon[74356]: 7.10 deep-scrub starts
Dec 15 10:38:44 compute-0 ceph-mon[74356]: 7.10 deep-scrub ok
Dec 15 10:38:44 compute-0 ceph-mon[74356]: pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 221 KiB/s rd, 6.9 KiB/s wr, 408 op/s
Dec 15 10:38:44 compute-0 ceph-mon[74356]: 7.a scrub starts
Dec 15 10:38:44 compute-0 ceph-mon[74356]: 7.a scrub ok
Dec 15 10:38:44 compute-0 ceph-mon[74356]: 6.e scrub starts
Dec 15 10:38:44 compute-0 ceph-mon[74356]: 6.e scrub ok
Dec 15 10:38:44 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Dec 15 10:38:44 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Dec 15 10:38:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e8 new map
Dec 15 10:38:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e8 print_map
                                           e8
                                           btime 2025-12-15T10:38:44.768584+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-15T10:38:06.056696+0000
                                           modified        2025-12-15T10:38:41.726916+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24175}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24175 members: 24175
                                           [mds.cephfs.compute-2.mhljub{0:24175} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3276111900,v1:192.168.122.102:6805/3276111900] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.fathlc{-1:14427} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/1912978174,v1:192.168.122.100:6807/1912978174] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.mmswte{-1:24170} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/664164116,v1:192.168.122.101:6805/664164116] compat {c=[1],r=[1],i=[1fff]}]
Dec 15 10:38:44 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/664164116,v1:192.168.122.101:6805/664164116] up:standby
Dec 15 10:38:44 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.mhljub=up:active} 2 up:standby
Dec 15 10:38:45 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v38: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 176 KiB/s rd, 5.5 KiB/s wr, 325 op/s
Dec 15 10:38:45 compute-0 ceph-mon[74356]: 6.11 scrub starts
Dec 15 10:38:45 compute-0 ceph-mon[74356]: 6.11 scrub ok
Dec 15 10:38:45 compute-0 ceph-mon[74356]: mds.? [v2:192.168.122.101:6804/664164116,v1:192.168.122.101:6805/664164116] up:standby
Dec 15 10:38:45 compute-0 ceph-mon[74356]: fsmap cephfs:1 {0=cephfs.compute-2.mhljub=up:active} 2 up:standby
Dec 15 10:38:45 compute-0 ceph-mon[74356]: 6.3 scrub starts
Dec 15 10:38:45 compute-0 ceph-mon[74356]: 6.3 scrub ok
Dec 15 10:38:45 compute-0 ceph-mgr[74651]: [progress INFO root] Writing back 14 completed events
Dec 15 10:38:45 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 15 10:38:45 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:45 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event d3683f48-d68c-4091-ba41-8eefa4e2767d (Global Recovery Event) in 5 seconds
Dec 15 10:38:45 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Dec 15 10:38:45 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Dec 15 10:38:45 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec 15 10:38:45 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 15 10:38:45 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 15 10:38:45 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec 15 10:38:45 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec 15 10:38:45 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.uezrcp-rgw
Dec 15 10:38:45 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.uezrcp-rgw
Dec 15 10:38:45 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.uezrcp-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 15 10:38:45 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.uezrcp-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 15 10:38:45 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.uezrcp-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 15 10:38:45 compute-0 ceph-mgr[74651]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.uezrcp's ganesha conf is defaulting to empty
Dec 15 10:38:45 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.uezrcp's ganesha conf is defaulting to empty
Dec 15 10:38:45 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:38:45 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:45 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.uezrcp on compute-2
Dec 15 10:38:45 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.uezrcp on compute-2
Dec 15 10:38:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:38:46 compute-0 ceph-mon[74356]: 6.10 scrub starts
Dec 15 10:38:46 compute-0 ceph-mon[74356]: 6.10 scrub ok
Dec 15 10:38:46 compute-0 ceph-mon[74356]: pgmap v38: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 176 KiB/s rd, 5.5 KiB/s wr, 325 op/s
Dec 15 10:38:46 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:46 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 15 10:38:46 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 15 10:38:46 compute-0 ceph-mon[74356]: 6.19 deep-scrub starts
Dec 15 10:38:46 compute-0 ceph-mon[74356]: 6.19 deep-scrub ok
Dec 15 10:38:46 compute-0 ceph-mon[74356]: Rados config object exists: conf-nfs.cephfs
Dec 15 10:38:46 compute-0 ceph-mon[74356]: Creating key for client.nfs.cephfs.1.0.compute-2.uezrcp-rgw
Dec 15 10:38:46 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.uezrcp-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 15 10:38:46 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.uezrcp-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 15 10:38:46 compute-0 ceph-mon[74356]: Bind address in nfs.cephfs.1.0.compute-2.uezrcp's ganesha conf is defaulting to empty
Dec 15 10:38:46 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:46 compute-0 ceph-mon[74356]: Deploying daemon nfs.cephfs.1.0.compute-2.uezrcp on compute-2
Dec 15 10:38:46 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Dec 15 10:38:46 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Dec 15 10:38:47 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v39: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 151 KiB/s rd, 4.6 KiB/s wr, 278 op/s
Dec 15 10:38:47 compute-0 ceph-mon[74356]: 6.13 scrub starts
Dec 15 10:38:47 compute-0 ceph-mon[74356]: 6.13 scrub ok
Dec 15 10:38:47 compute-0 ceph-mon[74356]: 6.1a scrub starts
Dec 15 10:38:47 compute-0 ceph-mon[74356]: 6.1a scrub ok
Dec 15 10:38:47 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Dec 15 10:38:47 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Dec 15 10:38:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:38:47 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:38:47 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 15 10:38:47 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:47 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.stewbo
Dec 15 10:38:47 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.stewbo
Dec 15 10:38:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.stewbo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec 15 10:38:47 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.stewbo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 15 10:38:47 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.stewbo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 15 10:38:47 compute-0 ceph-mgr[74651]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec 15 10:38:47 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec 15 10:38:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec 15 10:38:47 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 15 10:38:47 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 15 10:38:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:38:47 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec 15 10:38:47 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 15 10:38:48 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 15 10:38:48 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec 15 10:38:48 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec 15 10:38:48 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.stewbo-rgw
Dec 15 10:38:48 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.stewbo-rgw
Dec 15 10:38:48 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.stewbo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 15 10:38:48 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.stewbo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 15 10:38:48 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.stewbo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 15 10:38:48 compute-0 ceph-mgr[74651]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.stewbo's ganesha conf is defaulting to empty
Dec 15 10:38:48 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.stewbo's ganesha conf is defaulting to empty
Dec 15 10:38:48 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:38:48 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:48 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.stewbo on compute-0
Dec 15 10:38:48 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.stewbo on compute-0
Dec 15 10:38:48 compute-0 sudo[94292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:38:48 compute-0 sudo[94292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:48 compute-0 sudo[94292]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:48 compute-0 sudo[94317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:38:48 compute-0 sudo[94317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
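The two sudo lines above are the mgr, connected as ceph-admin over SSH, invoking the copy of cephadm it staged under /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/ with "_orch deploy", presumably for the nfs.cephfs.2.0.compute-0.stewbo daemon announced just before. A hedged sketch of checking the result on this host once the deploy finishes (assumed follow-up commands; they presume the cephadm binary is installed on the host, and the systemd unit name follows the usual ceph-<fsid>@<daemon-name>.service convention):

    # Illustrative host-side checks after a cephadm '_orch deploy'.
    cephadm ls --no-detail   # daemons cephadm manages on this node, with their systemd unit names
    systemctl status ceph-77365f67-614e-5a8d-b658-640395550c79@nfs.cephfs.2.0.compute-0.stewbo.service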
Dec 15 10:38:48 compute-0 ceph-mon[74356]: 7.13 scrub starts
Dec 15 10:38:48 compute-0 ceph-mon[74356]: 7.13 scrub ok
Dec 15 10:38:48 compute-0 ceph-mon[74356]: pgmap v39: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 151 KiB/s rd, 4.6 KiB/s wr, 278 op/s
Dec 15 10:38:48 compute-0 ceph-mon[74356]: 6.1d scrub starts
Dec 15 10:38:48 compute-0 ceph-mon[74356]: 6.1d scrub ok
Dec 15 10:38:48 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:48 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:48 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:48 compute-0 ceph-mon[74356]: Creating key for client.nfs.cephfs.2.0.compute-0.stewbo
Dec 15 10:38:48 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.stewbo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 15 10:38:48 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.stewbo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 15 10:38:48 compute-0 ceph-mon[74356]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec 15 10:38:48 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 15 10:38:48 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 15 10:38:48 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:48 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 15 10:38:48 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 15 10:38:48 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.stewbo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 15 10:38:48 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.stewbo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 15 10:38:48 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:38:48 compute-0 podman[94381]: 2025-12-15 10:38:48.664615134 +0000 UTC m=+0.049528194 container create e228301adf61286fc4a563fc128c1a294cf1dc192eb86cb731b0e1ca84e18780 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_wilbur, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:38:48 compute-0 systemd[1]: Started libpod-conmon-e228301adf61286fc4a563fc128c1a294cf1dc192eb86cb731b0e1ca84e18780.scope.
Dec 15 10:38:48 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:38:48 compute-0 podman[94381]: 2025-12-15 10:38:48.732790878 +0000 UTC m=+0.117703958 container init e228301adf61286fc4a563fc128c1a294cf1dc192eb86cb731b0e1ca84e18780 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_wilbur, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:38:48 compute-0 podman[94381]: 2025-12-15 10:38:48.641755121 +0000 UTC m=+0.026668211 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:38:48 compute-0 podman[94381]: 2025-12-15 10:38:48.737901177 +0000 UTC m=+0.122814267 container start e228301adf61286fc4a563fc128c1a294cf1dc192eb86cb731b0e1ca84e18780 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_wilbur, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 15 10:38:48 compute-0 amazing_wilbur[94397]: 167 167
Dec 15 10:38:48 compute-0 systemd[1]: libpod-e228301adf61286fc4a563fc128c1a294cf1dc192eb86cb731b0e1ca84e18780.scope: Deactivated successfully.
Dec 15 10:38:48 compute-0 podman[94381]: 2025-12-15 10:38:48.742523991 +0000 UTC m=+0.127437081 container attach e228301adf61286fc4a563fc128c1a294cf1dc192eb86cb731b0e1ca84e18780 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:38:48 compute-0 podman[94381]: 2025-12-15 10:38:48.742835251 +0000 UTC m=+0.127748311 container died e228301adf61286fc4a563fc128c1a294cf1dc192eb86cb731b0e1ca84e18780 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_wilbur, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Dec 15 10:38:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a44846e0a185d5e21967239d038d9947dfbe1849f89a7a51dd334e7f2f62c395-merged.mount: Deactivated successfully.
Dec 15 10:38:48 compute-0 podman[94381]: 2025-12-15 10:38:48.778992507 +0000 UTC m=+0.163905587 container remove e228301adf61286fc4a563fc128c1a294cf1dc192eb86cb731b0e1ca84e18780 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 15 10:38:48 compute-0 systemd[1]: libpod-conmon-e228301adf61286fc4a563fc128c1a294cf1dc192eb86cb731b0e1ca84e18780.scope: Deactivated successfully.
Dec 15 10:38:48 compute-0 systemd[1]: Reloading.
Dec 15 10:38:48 compute-0 systemd-rc-local-generator[94442]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:38:48 compute-0 systemd-sysv-generator[94445]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:38:49 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v40: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 137 KiB/s rd, 4.2 KiB/s wr, 252 op/s
Dec 15 10:38:49 compute-0 systemd[1]: Reloading.
Dec 15 10:38:49 compute-0 systemd-rc-local-generator[94481]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:38:49 compute-0 systemd-sysv-generator[94484]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:38:49 compute-0 ceph-mon[74356]: Rados config object exists: conf-nfs.cephfs
Dec 15 10:38:49 compute-0 ceph-mon[74356]: Creating key for client.nfs.cephfs.2.0.compute-0.stewbo-rgw
Dec 15 10:38:49 compute-0 ceph-mon[74356]: Bind address in nfs.cephfs.2.0.compute-0.stewbo's ganesha conf is defaulting to empty
Dec 15 10:38:49 compute-0 ceph-mon[74356]: Deploying daemon nfs.cephfs.2.0.compute-0.stewbo on compute-0
Dec 15 10:38:49 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.stewbo for 77365f67-614e-5a8d-b658-640395550c79...
Dec 15 10:38:49 compute-0 podman[94538]: 2025-12-15 10:38:49.618769791 +0000 UTC m=+0.035373023 container create c0f38bc539f687c54b7eb01f058f3691b3a24f961ff23e54889334c42825f1f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 15 10:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d64c11962c4d24acb55c711b7951cfc5a722fd016555c0855bc604aff422ece3/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d64c11962c4d24acb55c711b7951cfc5a722fd016555c0855bc604aff422ece3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d64c11962c4d24acb55c711b7951cfc5a722fd016555c0855bc604aff422ece3/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d64c11962c4d24acb55c711b7951cfc5a722fd016555c0855bc604aff422ece3/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.stewbo-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:49 compute-0 podman[94538]: 2025-12-15 10:38:49.682071134 +0000 UTC m=+0.098674386 container init c0f38bc539f687c54b7eb01f058f3691b3a24f961ff23e54889334c42825f1f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:38:49 compute-0 podman[94538]: 2025-12-15 10:38:49.686518812 +0000 UTC m=+0.103122044 container start c0f38bc539f687c54b7eb01f058f3691b3a24f961ff23e54889334c42825f1f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 15 10:38:49 compute-0 bash[94538]: c0f38bc539f687c54b7eb01f058f3691b3a24f961ff23e54889334c42825f1f4
Dec 15 10:38:49 compute-0 podman[94538]: 2025-12-15 10:38:49.603698802 +0000 UTC m=+0.020302054 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:38:49 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.stewbo for 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 15 10:38:49 compute-0 sudo[94317]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 15 10:38:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 15 10:38:49 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 15 10:38:49 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 15 10:38:49 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:49 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev f98947ca-ddb4-457f-af3e-abf764889564 (Updating nfs.cephfs deployment (+3 -> 3))
Dec 15 10:38:49 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event f98947ca-ddb4-457f-af3e-abf764889564 (Updating nfs.cephfs deployment (+3 -> 3)) in 9 seconds
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 15 10:38:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 15 10:38:49 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:49 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev b9d61420-ae7a-406d-b6e2-457dede4f51c (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Dec 15 10:38:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 15 10:38:49 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:49 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.nuxuso on compute-1
Dec 15 10:38:49 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.nuxuso on compute-1
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 15 10:38:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 15 10:38:50 compute-0 ceph-mgr[74651]: [progress INFO root] Writing back 16 completed events
Dec 15 10:38:50 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 15 10:38:50 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:50 compute-0 ceph-mon[74356]: pgmap v40: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 137 KiB/s rd, 4.2 KiB/s wr, 252 op/s
Dec 15 10:38:50 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:50 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:50 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:50 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:50 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:50 compute-0 ceph-mon[74356]: Deploying daemon haproxy.nfs.cephfs.compute-1.nuxuso on compute-1
Dec 15 10:38:50 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:51 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v41: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 3.6 KiB/s wr, 217 op/s
Dec 15 10:38:51 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:38:52 compute-0 ceph-mon[74356]: pgmap v41: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 3.6 KiB/s wr, 217 op/s
Dec 15 10:38:53 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v42: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 124 KiB/s rd, 5.4 KiB/s wr, 224 op/s
Dec 15 10:38:53 compute-0 sudo[94631]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xejhkdsxeiveemxjyubppkopsmymvlcu ; /usr/bin/python3'
Dec 15 10:38:53 compute-0 sudo[94631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:38:53 compute-0 python3[94633]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:38:53 compute-0 podman[94634]: 2025-12-15 10:38:53.526719296 +0000 UTC m=+0.050464064 container create 11449e3bd7ef1774ce8e0b0ed573c5d40856cfd3d0278f3dbcf94dc6f416bfd9 (image=quay.io/ceph/ceph:v19, name=friendly_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 15 10:38:53 compute-0 systemd[1]: Started libpod-conmon-11449e3bd7ef1774ce8e0b0ed573c5d40856cfd3d0278f3dbcf94dc6f416bfd9.scope.
Dec 15 10:38:53 compute-0 podman[94634]: 2025-12-15 10:38:53.503792631 +0000 UTC m=+0.027537429 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:38:53 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/563a9418d685275bbf639035719ddda33690eb1022d173c76aef3b342882c5be/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/563a9418d685275bbf639035719ddda33690eb1022d173c76aef3b342882c5be/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:53 compute-0 podman[94634]: 2025-12-15 10:38:53.631578163 +0000 UTC m=+0.155322961 container init 11449e3bd7ef1774ce8e0b0ed573c5d40856cfd3d0278f3dbcf94dc6f416bfd9 (image=quay.io/ceph/ceph:v19, name=friendly_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:38:53 compute-0 podman[94634]: 2025-12-15 10:38:53.640498151 +0000 UTC m=+0.164242919 container start 11449e3bd7ef1774ce8e0b0ed573c5d40856cfd3d0278f3dbcf94dc6f416bfd9 (image=quay.io/ceph/ceph:v19, name=friendly_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Dec 15 10:38:53 compute-0 podman[94634]: 2025-12-15 10:38:53.644067461 +0000 UTC m=+0.167812349 container attach 11449e3bd7ef1774ce8e0b0ed573c5d40856cfd3d0278f3dbcf94dc6f416bfd9 (image=quay.io/ceph/ceph:v19, name=friendly_mcclintock, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 15 10:38:53 compute-0 friendly_mcclintock[94650]: ERROR: invalid flag --daemon-type
Dec 15 10:38:53 compute-0 systemd[1]: libpod-11449e3bd7ef1774ce8e0b0ed573c5d40856cfd3d0278f3dbcf94dc6f416bfd9.scope: Deactivated successfully.
Dec 15 10:38:53 compute-0 podman[94634]: 2025-12-15 10:38:53.695459093 +0000 UTC m=+0.219203861 container died 11449e3bd7ef1774ce8e0b0ed573c5d40856cfd3d0278f3dbcf94dc6f416bfd9 (image=quay.io/ceph/ceph:v19, name=friendly_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:38:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-563a9418d685275bbf639035719ddda33690eb1022d173c76aef3b342882c5be-merged.mount: Deactivated successfully.
Dec 15 10:38:53 compute-0 podman[94634]: 2025-12-15 10:38:53.876751351 +0000 UTC m=+0.400496139 container remove 11449e3bd7ef1774ce8e0b0ed573c5d40856cfd3d0278f3dbcf94dc6f416bfd9 (image=quay.io/ceph/ceph:v19, name=friendly_mcclintock, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 15 10:38:53 compute-0 systemd[1]: libpod-conmon-11449e3bd7ef1774ce8e0b0ed573c5d40856cfd3d0278f3dbcf94dc6f416bfd9.scope: Deactivated successfully.
Dec 15 10:38:53 compute-0 sudo[94631]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:54 compute-0 ceph-mon[74356]: pgmap v42: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 124 KiB/s rd, 5.4 KiB/s wr, 224 op/s
Dec 15 10:38:55 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v43: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 1.7 KiB/s wr, 7 op/s
Dec 15 10:38:55 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:38:55 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:55 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:38:55 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:55 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 15 10:38:55 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:55 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.ykblqa on compute-0
Dec 15 10:38:55 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.ykblqa on compute-0
Dec 15 10:38:55 compute-0 sudo[94683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:38:55 compute-0 sudo[94683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:55 compute-0 sudo[94683]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:55 compute-0 sudo[94708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:38:55 compute-0 sudo[94708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:38:56 compute-0 ceph-mon[74356]: pgmap v43: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 1.7 KiB/s wr, 7 op/s
Dec 15 10:38:56 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:56 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:56 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:56 compute-0 ceph-mon[74356]: Deploying daemon haproxy.nfs.cephfs.compute-0.ykblqa on compute-0
Dec 15 10:38:56 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:38:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:56 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:38:57 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v44: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 1.7 KiB/s wr, 7 op/s
Dec 15 10:38:57 compute-0 podman[94774]: 2025-12-15 10:38:57.901478975 +0000 UTC m=+2.268705545 container create aabb6b8b2be62ad400bbbee2a9d77ca65998ca5b0a50312df9bc055b94d28187 (image=quay.io/ceph/haproxy:2.3, name=modest_benz)
Dec 15 10:38:57 compute-0 systemd[1]: Started libpod-conmon-aabb6b8b2be62ad400bbbee2a9d77ca65998ca5b0a50312df9bc055b94d28187.scope.
Dec 15 10:38:57 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:38:57 compute-0 podman[94774]: 2025-12-15 10:38:57.886972782 +0000 UTC m=+2.254199382 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 15 10:38:57 compute-0 podman[94774]: 2025-12-15 10:38:57.961665709 +0000 UTC m=+2.328892279 container init aabb6b8b2be62ad400bbbee2a9d77ca65998ca5b0a50312df9bc055b94d28187 (image=quay.io/ceph/haproxy:2.3, name=modest_benz)
Dec 15 10:38:57 compute-0 podman[94774]: 2025-12-15 10:38:57.967336416 +0000 UTC m=+2.334562986 container start aabb6b8b2be62ad400bbbee2a9d77ca65998ca5b0a50312df9bc055b94d28187 (image=quay.io/ceph/haproxy:2.3, name=modest_benz)
Dec 15 10:38:57 compute-0 podman[94774]: 2025-12-15 10:38:57.970950139 +0000 UTC m=+2.338176719 container attach aabb6b8b2be62ad400bbbee2a9d77ca65998ca5b0a50312df9bc055b94d28187 (image=quay.io/ceph/haproxy:2.3, name=modest_benz)
Dec 15 10:38:57 compute-0 modest_benz[94890]: 0 0
Dec 15 10:38:57 compute-0 systemd[1]: libpod-aabb6b8b2be62ad400bbbee2a9d77ca65998ca5b0a50312df9bc055b94d28187.scope: Deactivated successfully.
Dec 15 10:38:57 compute-0 podman[94774]: 2025-12-15 10:38:57.971707123 +0000 UTC m=+2.338933693 container died aabb6b8b2be62ad400bbbee2a9d77ca65998ca5b0a50312df9bc055b94d28187 (image=quay.io/ceph/haproxy:2.3, name=modest_benz)
Dec 15 10:38:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fcf5f056f88e437eaff99360f5f84fe0904b8a028d9999aeb7dc6c1289e4f3d-merged.mount: Deactivated successfully.
Dec 15 10:38:58 compute-0 podman[94774]: 2025-12-15 10:38:58.004093681 +0000 UTC m=+2.371320251 container remove aabb6b8b2be62ad400bbbee2a9d77ca65998ca5b0a50312df9bc055b94d28187 (image=quay.io/ceph/haproxy:2.3, name=modest_benz)
Dec 15 10:38:58 compute-0 systemd[1]: libpod-conmon-aabb6b8b2be62ad400bbbee2a9d77ca65998ca5b0a50312df9bc055b94d28187.scope: Deactivated successfully.
Dec 15 10:38:58 compute-0 systemd[1]: Reloading.
Dec 15 10:38:58 compute-0 ceph-mon[74356]: pgmap v44: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 1.7 KiB/s wr, 7 op/s
Dec 15 10:38:58 compute-0 systemd-sysv-generator[94939]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:38:58 compute-0 systemd-rc-local-generator[94933]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:38:58 compute-0 systemd[1]: Reloading.
Dec 15 10:38:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:38:58 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e4001970 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:38:58 compute-0 systemd-rc-local-generator[94977]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:38:58 compute-0 systemd-sysv-generator[94981]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:38:58 compute-0 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.ykblqa for 77365f67-614e-5a8d-b658-640395550c79...
Dec 15 10:38:58 compute-0 podman[95032]: 2025-12-15 10:38:58.837533327 +0000 UTC m=+0.050051910 container create 55276bcc9605a59cf148bdf11fc4eb753ca401193f2349bda31c6714ea73d19e (image=quay.io/ceph/haproxy:2.3, name=ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-nfs-cephfs-compute-0-ykblqa)
Dec 15 10:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6db56073f38faf9e1b054be6fa1a9a29dc4991dcd1efd9e938ea3db0aff2d907/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec 15 10:38:58 compute-0 podman[95032]: 2025-12-15 10:38:58.887812014 +0000 UTC m=+0.100330607 container init 55276bcc9605a59cf148bdf11fc4eb753ca401193f2349bda31c6714ea73d19e (image=quay.io/ceph/haproxy:2.3, name=ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-nfs-cephfs-compute-0-ykblqa)
Dec 15 10:38:58 compute-0 podman[95032]: 2025-12-15 10:38:58.892314534 +0000 UTC m=+0.104833107 container start 55276bcc9605a59cf148bdf11fc4eb753ca401193f2349bda31c6714ea73d19e (image=quay.io/ceph/haproxy:2.3, name=ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-nfs-cephfs-compute-0-ykblqa)
Dec 15 10:38:58 compute-0 bash[95032]: 55276bcc9605a59cf148bdf11fc4eb753ca401193f2349bda31c6714ea73d19e
Dec 15 10:38:58 compute-0 podman[95032]: 2025-12-15 10:38:58.811407994 +0000 UTC m=+0.023926607 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 15 10:38:58 compute-0 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.ykblqa for 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:38:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-nfs-cephfs-compute-0-ykblqa[95047]: [NOTICE] 348/103858 (2) : New worker #1 (4) forked
Dec 15 10:38:58 compute-0 sudo[94708]: pam_unix(sudo:session): session closed for user root
Dec 15 10:38:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:38:58 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:38:58 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 15 10:38:58 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:38:58 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.vexzcb on compute-2
Dec 15 10:38:58 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.vexzcb on compute-2
Dec 15 10:38:59 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v45: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 15 10:39:00 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:00 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:00 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:00 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc000d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:00 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:00 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:00 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:01 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v46: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 15 10:39:01 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:39:01 compute-0 ceph-mon[74356]: Deploying daemon haproxy.nfs.cephfs.compute-2.vexzcb on compute-2
Dec 15 10:39:01 compute-0 ceph-mon[74356]: pgmap v45: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 15 10:39:02 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:02 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4001bd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:02 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:02 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e4002290 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:02 compute-0 ceph-mon[74356]: pgmap v46: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 15 10:39:03 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v47: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 15 10:39:03 compute-0 sudo[95084]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wglzqbyfzkilnpvoaoyknoijxaztdpxy ; /usr/bin/python3'
Dec 15 10:39:04 compute-0 sudo[95084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:39:04 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:39:04 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:04 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d00016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:04 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:04 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:39:04 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:04 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 15 10:39:04 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:04 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Dec 15 10:39:04 compute-0 ceph-mon[74356]: pgmap v47: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 15 10:39:04 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:04 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:04 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:04 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 15 10:39:04 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 15 10:39:04 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 15 10:39:04 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 15 10:39:04 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 15 10:39:04 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 15 10:39:04 compute-0 python3[95086]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:39:04 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.kwlgeh on compute-1
Dec 15 10:39:04 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.kwlgeh on compute-1
Dec 15 10:39:04 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:04 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:04 compute-0 podman[95087]: 2025-12-15 10:39:04.395955063 +0000 UTC m=+0.048794621 container create e4a20bbb5fab431963e65475ab7bf6f28726f91f9a16e7ea74f84f5c41736ca8 (image=quay.io/ceph/ceph:v19, name=nifty_lederberg, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 15 10:39:04 compute-0 systemd[1]: Started libpod-conmon-e4a20bbb5fab431963e65475ab7bf6f28726f91f9a16e7ea74f84f5c41736ca8.scope.
Dec 15 10:39:04 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:39:04 compute-0 podman[95087]: 2025-12-15 10:39:04.374560216 +0000 UTC m=+0.027399764 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:39:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/533f9eeb4386d4509fbf57cc3b70fd586a9d1c38a60c062bee67678f58cb8fd3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/533f9eeb4386d4509fbf57cc3b70fd586a9d1c38a60c062bee67678f58cb8fd3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:04 compute-0 podman[95087]: 2025-12-15 10:39:04.496247198 +0000 UTC m=+0.149086756 container init e4a20bbb5fab431963e65475ab7bf6f28726f91f9a16e7ea74f84f5c41736ca8 (image=quay.io/ceph/ceph:v19, name=nifty_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 15 10:39:04 compute-0 podman[95087]: 2025-12-15 10:39:04.503277546 +0000 UTC m=+0.156117074 container start e4a20bbb5fab431963e65475ab7bf6f28726f91f9a16e7ea74f84f5c41736ca8 (image=quay.io/ceph/ceph:v19, name=nifty_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:39:04 compute-0 podman[95087]: 2025-12-15 10:39:04.506501727 +0000 UTC m=+0.159341275 container attach e4a20bbb5fab431963e65475ab7bf6f28726f91f9a16e7ea74f84f5c41736ca8 (image=quay.io/ceph/ceph:v19, name=nifty_lederberg, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 15 10:39:04 compute-0 nifty_lederberg[95102]: ERROR: invalid flag --daemon-type
Dec 15 10:39:04 compute-0 systemd[1]: libpod-e4a20bbb5fab431963e65475ab7bf6f28726f91f9a16e7ea74f84f5c41736ca8.scope: Deactivated successfully.
Dec 15 10:39:04 compute-0 podman[95087]: 2025-12-15 10:39:04.568302052 +0000 UTC m=+0.221141580 container died e4a20bbb5fab431963e65475ab7bf6f28726f91f9a16e7ea74f84f5c41736ca8 (image=quay.io/ceph/ceph:v19, name=nifty_lederberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 15 10:39:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-533f9eeb4386d4509fbf57cc3b70fd586a9d1c38a60c062bee67678f58cb8fd3-merged.mount: Deactivated successfully.
Dec 15 10:39:04 compute-0 podman[95087]: 2025-12-15 10:39:04.604796859 +0000 UTC m=+0.257636427 container remove e4a20bbb5fab431963e65475ab7bf6f28726f91f9a16e7ea74f84f5c41736ca8 (image=quay.io/ceph/ceph:v19, name=nifty_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 15 10:39:04 compute-0 systemd[1]: libpod-conmon-e4a20bbb5fab431963e65475ab7bf6f28726f91f9a16e7ea74f84f5c41736ca8.scope: Deactivated successfully.
Dec 15 10:39:04 compute-0 sudo[95084]: pam_unix(sudo:session): session closed for user root
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [balancer INFO root] Optimize plan auto_2025-12-15_10:39:05
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [balancer INFO root] do_upmap
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'volumes', 'images', 'cephfs.cephfs.meta', 'vms', '.nfs', 'backups', '.mgr', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta']
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v48: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Dec 15 10:39:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Dec 15 10:39:05 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:39:05 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:05 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e4002290 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec 15 10:39:05 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:05 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:05 compute-0 ceph-mon[74356]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 15 10:39:05 compute-0 ceph-mon[74356]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 15 10:39:05 compute-0 ceph-mon[74356]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 15 10:39:05 compute-0 ceph-mon[74356]: Deploying daemon keepalived.nfs.cephfs.compute-1.kwlgeh on compute-1
Dec 15 10:39:05 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Dec 15 10:39:05 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Dec 15 10:39:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec 15 10:39:05 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev 39dbc5f7-311b-4195-bda0-f9777eaf64b0 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 15 10:39:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Dec 15 10:39:05 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 15 10:39:05 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 15 10:39:06 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:06 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4001bd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:39:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec 15 10:39:06 compute-0 ceph-mon[74356]: pgmap v48: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:39:06 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Dec 15 10:39:06 compute-0 ceph-mon[74356]: osdmap e55: 3 total, 3 up, 3 in
Dec 15 10:39:06 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec 15 10:39:06 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 15 10:39:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec 15 10:39:06 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec 15 10:39:06 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev 099f677f-d9df-48d3-b39c-3f4f9957ae5e (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 15 10:39:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Dec 15 10:39:06 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec 15 10:39:06 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:06 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d00016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:07 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v51: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Dec 15 10:39:07 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Dec 15 10:39:07 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 15 10:39:07 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Dec 15 10:39:07 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 15 10:39:07 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:07 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:07 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec 15 10:39:07 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 15 10:39:07 compute-0 ceph-mon[74356]: osdmap e56: 3 total, 3 up, 3 in
Dec 15 10:39:07 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec 15 10:39:07 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 15 10:39:07 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 15 10:39:07 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 15 10:39:07 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec 15 10:39:07 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 15 10:39:07 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec 15 10:39:07 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec 15 10:39:07 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev 396f3050-37d8-44cb-94dc-772f893628b5 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 15 10:39:07 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Dec 15 10:39:07 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec 15 10:39:07 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 57 pg[9.0( v 46'6 (0'0,46'6] local-lis/les=45/46 n=6 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=11.168114662s) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 46'5 mlcod 46'5 active pruub 158.242935181s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:07 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 57 pg[8.0( v 54'45 (0'0,54'45] local-lis/les=41/42 n=5 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=57 pruub=15.003527641s) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 lcod 54'44 mlcod 54'44 active pruub 162.078475952s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:07 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 57 pg[9.0( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=11.168114662s) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 46'5 mlcod 0'0 unknown pruub 158.242935181s@ mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:07 compute-0 ceph-osd[82838]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55b8a46c4fc0) operator()   moving buffer(0x55b8a32d8028 space 0x55b8a32ee0e0 0x0~1000 clean)
Dec 15 10:39:07 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 57 pg[8.0( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=57 pruub=15.003527641s) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 lcod 54'44 mlcod 0'0 unknown pruub 162.078475952s@ mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:07 compute-0 ceph-osd[82838]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55b8a46c4fc0) operator()   moving buffer(0x55b8a2fc2f28 space 0x55b8a32ef600 0x0~1000 clean)
Dec 15 10:39:07 compute-0 ceph-osd[82838]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55b8a46c4fc0) operator()   moving buffer(0x55b8a32b9928 space 0x55b8a3150350 0x0~1000 clean)
Dec 15 10:39:07 compute-0 ceph-osd[82838]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55b8a46c4fc0) operator()   moving buffer(0x55b8a32d8528 space 0x55b8a32eed10 0x0~1000 clean)
Dec 15 10:39:08 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:08 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e4002290 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:08 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:08 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec 15 10:39:08 compute-0 ceph-mon[74356]: pgmap v51: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Dec 15 10:39:08 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 15 10:39:08 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec 15 10:39:08 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 15 10:39:08 compute-0 ceph-mon[74356]: osdmap e57: 3 total, 3 up, 3 in
Dec 15 10:39:08 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec 15 10:39:08 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 15 10:39:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec 15 10:39:08 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec 15 10:39:08 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev 986150c9-61bd-491d-9975-974834f515a5 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 15 10:39:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Dec 15 10:39:08 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.15( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.14( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.14( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.17( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.16( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.17( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.16( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.11( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.10( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.10( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.11( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.2( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=1 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.3( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=1 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.2( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=1 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.3( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=1 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.f( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.9( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.e( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.8( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.8( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.b( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.9( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.a( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.f( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.e( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.c( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.d( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.d( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.c( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.a( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.b( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.1( v 54'45 (0'0,54'45] local-lis/les=41/42 n=1 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.1( v 46'6 (0'0,46'6] local-lis/les=45/46 n=1 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.7( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.6( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=1 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.6( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.4( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=1 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.7( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.5( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=1 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.5( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=1 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.4( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=1 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.1a( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.1a( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.1b( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.1b( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.18( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.19( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.18( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.19( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.1e( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.1f( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.1e( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.1f( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.1c( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.1d( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.1c( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.1d( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.12( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.13( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.15( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.12( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.13( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.15( v 54'45 lc 0'0 (0'0,54'45] local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.17( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.14( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.14( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.16( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.17( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.11( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.16( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.10( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.11( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.2( v 54'45 (0'0,54'45] local-lis/les=57/58 n=1 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.10( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.2( v 46'6 (0'0,46'6] local-lis/les=57/58 n=1 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.3( v 46'6 (0'0,46'6] local-lis/les=57/58 n=1 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.3( v 54'45 (0'0,54'45] local-lis/les=57/58 n=1 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.f( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.8( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.e( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.b( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.8( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.9( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.f( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.a( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.e( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.9( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.d( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.a( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.c( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.b( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.d( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.1( v 54'45 (0'0,54'45] local-lis/les=57/58 n=1 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.1( v 46'6 (0'0,46'6] local-lis/les=57/58 n=1 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.c( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.6( v 46'6 (0'0,46'6] local-lis/les=57/58 n=1 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.7( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.0( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 54'44 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.6( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.7( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.0( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 46'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.5( v 46'6 (0'0,46'6] local-lis/les=57/58 n=1 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.4( v 46'6 (0'0,46'6] local-lis/les=57/58 n=1 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.5( v 54'45 (0'0,54'45] local-lis/les=57/58 n=1 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.4( v 54'45 (0'0,54'45] local-lis/les=57/58 n=1 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.1a( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.1b( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.1a( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.18( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.1b( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.19( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.18( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.1e( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.19( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.1f( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.1e( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.1c( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.1f( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.1c( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.1d( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.1d( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.13( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.12( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.12( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[9.13( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 58 pg[8.15( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=54'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:08 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Dec 15 10:39:08 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Dec 15 10:39:09 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v54: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:09 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Dec 15 10:39:09 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 15 10:39:09 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Dec 15 10:39:09 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 15 10:39:09 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:09 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d00016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:09 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec 15 10:39:09 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 15 10:39:09 compute-0 ceph-mon[74356]: osdmap e58: 3 total, 3 up, 3 in
Dec 15 10:39:09 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 15 10:39:09 compute-0 ceph-mon[74356]: 9.15 scrub starts
Dec 15 10:39:09 compute-0 ceph-mon[74356]: 9.15 scrub ok
Dec 15 10:39:09 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 15 10:39:09 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 15 10:39:09 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 15 10:39:09 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 15 10:39:09 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 15 10:39:09 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec 15 10:39:09 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec 15 10:39:09 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev f1145736-16db-4b59-981b-2c238c85447d (PG autoscaler increasing pool 12 PGs from 1 to 32)
Dec 15 10:39:09 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 59 pg[11.0( v 50'48 (0'0,50'48] local-lis/les=49/50 n=8 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=13.175191879s) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 50'47 mlcod 50'47 active pruub 162.343292236s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:09 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev 39dbc5f7-311b-4195-bda0-f9777eaf64b0 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 15 10:39:09 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event 39dbc5f7-311b-4195-bda0-f9777eaf64b0 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 4 seconds
Dec 15 10:39:09 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev 099f677f-d9df-48d3-b39c-3f4f9957ae5e (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 15 10:39:09 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event 099f677f-d9df-48d3-b39c-3f4f9957ae5e (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Dec 15 10:39:09 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev 396f3050-37d8-44cb-94dc-772f893628b5 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 15 10:39:09 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event 396f3050-37d8-44cb-94dc-772f893628b5 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Dec 15 10:39:09 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev 986150c9-61bd-491d-9975-974834f515a5 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 15 10:39:09 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event 986150c9-61bd-491d-9975-974834f515a5 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Dec 15 10:39:09 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev f1145736-16db-4b59-981b-2c238c85447d (PG autoscaler increasing pool 12 PGs from 1 to 32)
Dec 15 10:39:09 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event f1145736-16db-4b59-981b-2c238c85447d (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Dec 15 10:39:09 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 59 pg[11.0( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=13.175191879s) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 50'47 mlcod 0'0 unknown pruub 162.343292236s@ mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:09 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Dec 15 10:39:09 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Dec 15 10:39:10 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:10 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:10 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:10 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e4002290 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:10 compute-0 ceph-mgr[74651]: [progress INFO root] Writing back 21 completed events
Dec 15 10:39:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 15 10:39:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec 15 10:39:10 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Dec 15 10:39:10 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Dec 15 10:39:10 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:10 compute-0 ceph-mgr[74651]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Dec 15 10:39:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec 15 10:39:10 compute-0 ceph-mon[74356]: pgmap v54: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:10 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 15 10:39:10 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 15 10:39:10 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 15 10:39:10 compute-0 ceph-mon[74356]: osdmap e59: 3 total, 3 up, 3 in
Dec 15 10:39:10 compute-0 ceph-mon[74356]: 9.14 scrub starts
Dec 15 10:39:10 compute-0 ceph-mon[74356]: 9.14 scrub ok
Dec 15 10:39:10 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.17( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.16( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.15( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.14( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.13( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.12( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.c( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.b( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.a( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.1( v 50'48 (0'0,50'48] local-lis/les=49/50 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.d( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.9( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.e( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.f( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.8( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.2( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.3( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.4( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.5( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.6( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.7( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.18( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.19( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.1a( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.1b( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.1c( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.1d( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.1e( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.1f( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.10( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.11( v 50'48 lc 0'0 (0'0,50'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.17( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.16( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.13( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.14( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.12( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.0( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 50'47 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.15( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.c( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.b( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.a( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.1( v 50'48 (0'0,50'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.d( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.9( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.f( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.8( v 50'48 (0'0,50'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.2( v 50'48 (0'0,50'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.3( v 50'48 (0'0,50'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.4( v 50'48 (0'0,50'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.5( v 50'48 (0'0,50'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.6( v 50'48 (0'0,50'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.e( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.7( v 50'48 (0'0,50'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.18( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.1b( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.19( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.1d( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.1e( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.1a( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.1f( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.11( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.1c( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 60 pg[11.10( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:39:10 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:39:10 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 15 10:39:10 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:10 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 15 10:39:10 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 15 10:39:10 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 15 10:39:10 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 15 10:39:10 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 15 10:39:10 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 15 10:39:10 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.gdchmd on compute-0
Dec 15 10:39:10 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.gdchmd on compute-0
Dec 15 10:39:10 compute-0 sudo[95134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:39:10 compute-0 sudo[95134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:39:10 compute-0 sudo[95134]: pam_unix(sudo:session): session closed for user root
Dec 15 10:39:10 compute-0 sudo[95159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:39:10 compute-0 sudo[95159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:39:11 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v57: 322 pgs: 124 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:11 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec 15 10:39:11 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 15 10:39:11 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:11 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:11 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:39:11 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Dec 15 10:39:11 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Dec 15 10:39:11 compute-0 ceph-mon[74356]: 9.17 scrub starts
Dec 15 10:39:11 compute-0 ceph-mon[74356]: 9.17 scrub ok
Dec 15 10:39:11 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:11 compute-0 ceph-mon[74356]: osdmap e60: 3 total, 3 up, 3 in
Dec 15 10:39:11 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:11 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:11 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:11 compute-0 ceph-mon[74356]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 15 10:39:11 compute-0 ceph-mon[74356]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 15 10:39:11 compute-0 ceph-mon[74356]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 15 10:39:11 compute-0 ceph-mon[74356]: Deploying daemon keepalived.nfs.cephfs.compute-0.gdchmd on compute-0
Dec 15 10:39:11 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 15 10:39:11 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec 15 10:39:11 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 15 10:39:11 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec 15 10:39:11 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec 15 10:39:12 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:12 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:12 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:12 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:12 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Dec 15 10:39:12 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Dec 15 10:39:12 compute-0 ceph-mon[74356]: pgmap v57: 322 pgs: 124 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:12 compute-0 ceph-mon[74356]: 8.16 scrub starts
Dec 15 10:39:12 compute-0 ceph-mon[74356]: 8.16 scrub ok
Dec 15 10:39:12 compute-0 ceph-mon[74356]: 10.12 scrub starts
Dec 15 10:39:12 compute-0 ceph-mon[74356]: 10.12 scrub ok
Dec 15 10:39:12 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 15 10:39:12 compute-0 ceph-mon[74356]: osdmap e61: 3 total, 3 up, 3 in
Dec 15 10:39:12 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec 15 10:39:12 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec 15 10:39:12 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec 15 10:39:13 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v60: 353 pgs: 31 unknown, 322 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:13 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:13 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e4002290 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:13 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Dec 15 10:39:13 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Dec 15 10:39:13 compute-0 ceph-mon[74356]: 9.10 scrub starts
Dec 15 10:39:13 compute-0 ceph-mon[74356]: 9.10 scrub ok
Dec 15 10:39:13 compute-0 ceph-mon[74356]: 10.11 scrub starts
Dec 15 10:39:13 compute-0 ceph-mon[74356]: 10.11 scrub ok
Dec 15 10:39:13 compute-0 ceph-mon[74356]: osdmap e62: 3 total, 3 up, 3 in
Dec 15 10:39:14 compute-0 podman[95227]: 2025-12-15 10:39:14.121040694 +0000 UTC m=+2.742975090 container create a13e1c67d0d691b6a78a513ef0900b63106976d55249c7b1bd67a81f5b9ec60e (image=quay.io/ceph/keepalived:2.2.4, name=infallible_chatelet, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, name=keepalived, com.redhat.component=keepalived-container, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec 15 10:39:14 compute-0 systemd[1]: Started libpod-conmon-a13e1c67d0d691b6a78a513ef0900b63106976d55249c7b1bd67a81f5b9ec60e.scope.
Dec 15 10:39:14 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:39:14 compute-0 podman[95227]: 2025-12-15 10:39:14.192605713 +0000 UTC m=+2.814540149 container init a13e1c67d0d691b6a78a513ef0900b63106976d55249c7b1bd67a81f5b9ec60e (image=quay.io/ceph/keepalived:2.2.4, name=infallible_chatelet, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., architecture=x86_64, release=1793, build-date=2023-02-22T09:23:20, distribution-scope=public, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, vcs-type=git, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, com.redhat.component=keepalived-container, version=2.2.4)
Dec 15 10:39:14 compute-0 podman[95227]: 2025-12-15 10:39:14.198743904 +0000 UTC m=+2.820678290 container start a13e1c67d0d691b6a78a513ef0900b63106976d55249c7b1bd67a81f5b9ec60e (image=quay.io/ceph/keepalived:2.2.4, name=infallible_chatelet, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, architecture=x86_64, release=1793, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, com.redhat.component=keepalived-container, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 15 10:39:14 compute-0 podman[95227]: 2025-12-15 10:39:14.107145751 +0000 UTC m=+2.729080167 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 15 10:39:14 compute-0 podman[95227]: 2025-12-15 10:39:14.202680887 +0000 UTC m=+2.824615333 container attach a13e1c67d0d691b6a78a513ef0900b63106976d55249c7b1bd67a81f5b9ec60e (image=quay.io/ceph/keepalived:2.2.4, name=infallible_chatelet, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, vcs-type=git, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, distribution-scope=public, name=keepalived, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9)
Dec 15 10:39:14 compute-0 infallible_chatelet[95320]: 0 0
Dec 15 10:39:14 compute-0 systemd[1]: libpod-a13e1c67d0d691b6a78a513ef0900b63106976d55249c7b1bd67a81f5b9ec60e.scope: Deactivated successfully.
Dec 15 10:39:14 compute-0 podman[95227]: 2025-12-15 10:39:14.245649077 +0000 UTC m=+2.867583473 container died a13e1c67d0d691b6a78a513ef0900b63106976d55249c7b1bd67a81f5b9ec60e (image=quay.io/ceph/keepalived:2.2.4, name=infallible_chatelet, name=keepalived, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, architecture=x86_64, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, vcs-type=git, distribution-scope=public)
Dec 15 10:39:14 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:14 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-df478ab83b0e30f9d18de7c9a6ae3a10c13efd45a9b300d67af3c1085c28dad9-merged.mount: Deactivated successfully.
Dec 15 10:39:14 compute-0 podman[95227]: 2025-12-15 10:39:14.279831061 +0000 UTC m=+2.901765457 container remove a13e1c67d0d691b6a78a513ef0900b63106976d55249c7b1bd67a81f5b9ec60e (image=quay.io/ceph/keepalived:2.2.4, name=infallible_chatelet, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.buildah.version=1.28.2, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, version=2.2.4, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.openshift.tags=Ceph keepalived, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 15 10:39:14 compute-0 systemd[1]: libpod-conmon-a13e1c67d0d691b6a78a513ef0900b63106976d55249c7b1bd67a81f5b9ec60e.scope: Deactivated successfully.
Dec 15 10:39:14 compute-0 systemd[1]: Reloading.
Dec 15 10:39:14 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:14 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:14 compute-0 systemd-rc-local-generator[95367]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:39:14 compute-0 systemd-sysv-generator[95373]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:39:14 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Dec 15 10:39:14 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Dec 15 10:39:14 compute-0 systemd[1]: Reloading.
Dec 15 10:39:14 compute-0 systemd-rc-local-generator[95413]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:39:14 compute-0 systemd-sysv-generator[95417]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:39:14 compute-0 sudo[95441]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dezmxvhauwhvqcqtkvsmclwvqpfllzpb ; /usr/bin/python3'
Dec 15 10:39:14 compute-0 sudo[95441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:39:14 compute-0 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.gdchmd for 77365f67-614e-5a8d-b658-640395550c79...
Dec 15 10:39:14 compute-0 ceph-mon[74356]: pgmap v60: 353 pgs: 31 unknown, 322 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:14 compute-0 ceph-mon[74356]: 8.14 scrub starts
Dec 15 10:39:14 compute-0 ceph-mon[74356]: 8.14 scrub ok
Dec 15 10:39:14 compute-0 ceph-mon[74356]: 10.10 scrub starts
Dec 15 10:39:14 compute-0 ceph-mon[74356]: 10.10 scrub ok
Dec 15 10:39:15 compute-0 python3[95445]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:39:15 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v61: 353 pgs: 31 unknown, 322 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:15 compute-0 podman[95482]: 2025-12-15 10:39:15.10405132 +0000 UTC m=+0.048262805 container create 17d3ce3f2c1f851df59ffe3a73e40b5a04da9e63be2bd826e37fd13ee5104b45 (image=quay.io/ceph/ceph:v19, name=amazing_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 15 10:39:15 compute-0 systemd[1]: Started libpod-conmon-17d3ce3f2c1f851df59ffe3a73e40b5a04da9e63be2bd826e37fd13ee5104b45.scope.
Dec 15 10:39:15 compute-0 podman[95505]: 2025-12-15 10:39:15.137644847 +0000 UTC m=+0.045537330 container create eb383ee2660aa4da80291a00f1592a5a221d1461ab8a14db8d6bf5db83163774 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, architecture=x86_64, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, vcs-type=git, version=2.2.4, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=keepalived, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec 15 10:39:15 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2feee9f180a6eacddcb6a0f9227cbe789491dbc4ed46c6ca1c24d04fab1d6b11/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2feee9f180a6eacddcb6a0f9227cbe789491dbc4ed46c6ca1c24d04fab1d6b11/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:15 compute-0 podman[95482]: 2025-12-15 10:39:15.085076539 +0000 UTC m=+0.029288064 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9068470a08258f29cb208e828cc9db4526c5e285e98e8c918df25d28c5f741e2/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:15 compute-0 podman[95482]: 2025-12-15 10:39:15.197837111 +0000 UTC m=+0.142048616 container init 17d3ce3f2c1f851df59ffe3a73e40b5a04da9e63be2bd826e37fd13ee5104b45 (image=quay.io/ceph/ceph:v19, name=amazing_spence, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:39:15 compute-0 podman[95505]: 2025-12-15 10:39:15.2054818 +0000 UTC m=+0.113374323 container init eb383ee2660aa4da80291a00f1592a5a221d1461ab8a14db8d6bf5db83163774 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, version=2.2.4, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, description=keepalived for Ceph, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Dec 15 10:39:15 compute-0 podman[95482]: 2025-12-15 10:39:15.207705389 +0000 UTC m=+0.151916874 container start 17d3ce3f2c1f851df59ffe3a73e40b5a04da9e63be2bd826e37fd13ee5104b45 (image=quay.io/ceph/ceph:v19, name=amazing_spence, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:39:15 compute-0 podman[95482]: 2025-12-15 10:39:15.210672872 +0000 UTC m=+0.154884467 container attach 17d3ce3f2c1f851df59ffe3a73e40b5a04da9e63be2bd826e37fd13ee5104b45 (image=quay.io/ceph/ceph:v19, name=amazing_spence, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:39:15 compute-0 podman[95505]: 2025-12-15 10:39:15.211464337 +0000 UTC m=+0.119356830 container start eb383ee2660aa4da80291a00f1592a5a221d1461ab8a14db8d6bf5db83163774 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=2.2.4, release=1793, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, vendor=Red Hat, Inc.)
Dec 15 10:39:15 compute-0 podman[95505]: 2025-12-15 10:39:15.116610541 +0000 UTC m=+0.024503054 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 15 10:39:15 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:15 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:15 compute-0 bash[95505]: eb383ee2660aa4da80291a00f1592a5a221d1461ab8a14db8d6bf5db83163774
Dec 15 10:39:15 compute-0 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.gdchmd for 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:39:15 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd[95528]: Mon Dec 15 10:39:15 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec 15 10:39:15 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd[95528]: Mon Dec 15 10:39:15 2025: Running on Linux 5.14.0-648.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025 (built for Linux 5.14.0)
Dec 15 10:39:15 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd[95528]: Mon Dec 15 10:39:15 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec 15 10:39:15 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd[95528]: Mon Dec 15 10:39:15 2025: Configuration file /etc/keepalived/keepalived.conf
Dec 15 10:39:15 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd[95528]: Mon Dec 15 10:39:15 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec 15 10:39:15 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd[95528]: Mon Dec 15 10:39:15 2025: Starting VRRP child process, pid=4
Dec 15 10:39:15 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd[95528]: Mon Dec 15 10:39:15 2025: Startup complete
Dec 15 10:39:15 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd[95528]: Mon Dec 15 10:39:15 2025: (VI_0) Entering BACKUP STATE (init)
Dec 15 10:39:15 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd[95528]: Mon Dec 15 10:39:15 2025: VRRP_Script(check_backend) succeeded
Dec 15 10:39:15 compute-0 amazing_spence[95520]: ERROR: invalid flag --daemon-type
Dec 15 10:39:15 compute-0 systemd[1]: libpod-17d3ce3f2c1f851df59ffe3a73e40b5a04da9e63be2bd826e37fd13ee5104b45.scope: Deactivated successfully.
Dec 15 10:39:15 compute-0 podman[95482]: 2025-12-15 10:39:15.270121364 +0000 UTC m=+0.214332849 container died 17d3ce3f2c1f851df59ffe3a73e40b5a04da9e63be2bd826e37fd13ee5104b45 (image=quay.io/ceph/ceph:v19, name=amazing_spence, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:39:15 compute-0 sudo[95159]: pam_unix(sudo:session): session closed for user root
Dec 15 10:39:15 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:39:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-2feee9f180a6eacddcb6a0f9227cbe789491dbc4ed46c6ca1c24d04fab1d6b11-merged.mount: Deactivated successfully.
Dec 15 10:39:15 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:15 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:39:15 compute-0 podman[95482]: 2025-12-15 10:39:15.316266911 +0000 UTC m=+0.260478406 container remove 17d3ce3f2c1f851df59ffe3a73e40b5a04da9e63be2bd826e37fd13ee5104b45 (image=quay.io/ceph/ceph:v19, name=amazing_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:39:15 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:15 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 15 10:39:15 compute-0 systemd[1]: libpod-conmon-17d3ce3f2c1f851df59ffe3a73e40b5a04da9e63be2bd826e37fd13ee5104b45.scope: Deactivated successfully.
Dec 15 10:39:15 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:15 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 15 10:39:15 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 15 10:39:15 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 15 10:39:15 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 15 10:39:15 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 15 10:39:15 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 15 10:39:15 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.rvhtxo on compute-2
Dec 15 10:39:15 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.rvhtxo on compute-2
Dec 15 10:39:15 compute-0 sudo[95441]: pam_unix(sudo:session): session closed for user root
Dec 15 10:39:15 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Dec 15 10:39:15 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Dec 15 10:39:16 compute-0 ceph-mon[74356]: 9.16 scrub starts
Dec 15 10:39:16 compute-0 ceph-mon[74356]: 10.1f scrub starts
Dec 15 10:39:16 compute-0 ceph-mon[74356]: 9.16 scrub ok
Dec 15 10:39:16 compute-0 ceph-mon[74356]: 10.1f scrub ok
Dec 15 10:39:16 compute-0 ceph-mon[74356]: pgmap v61: 353 pgs: 31 unknown, 322 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:16 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:16 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:16 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:16 compute-0 ceph-mon[74356]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 15 10:39:16 compute-0 ceph-mon[74356]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 15 10:39:16 compute-0 ceph-mon[74356]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 15 10:39:16 compute-0 ceph-mon[74356]: Deploying daemon keepalived.nfs.cephfs.compute-2.rvhtxo on compute-2
Dec 15 10:39:16 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:16 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:16 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:39:16 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:16 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e4002290 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:16 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Dec 15 10:39:16 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Dec 15 10:39:17 compute-0 ceph-mon[74356]: 8.2 scrub starts
Dec 15 10:39:17 compute-0 ceph-mon[74356]: 8.2 scrub ok
Dec 15 10:39:17 compute-0 ceph-mon[74356]: 10.1c scrub starts
Dec 15 10:39:17 compute-0 ceph-mon[74356]: 10.1c scrub ok
Dec 15 10:39:17 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v62: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:17 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 15 10:39:17 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 15 10:39:17 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 15 10:39:17 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 15 10:39:17 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 15 10:39:17 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 15 10:39:17 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Dec 15 10:39:17 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 15 10:39:17 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 15 10:39:17 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 15 10:39:17 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:17 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:17 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Dec 15 10:39:17 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Dec 15 10:39:18 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec 15 10:39:18 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 15 10:39:18 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 15 10:39:18 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 15 10:39:18 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 15 10:39:18 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 15 10:39:18 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec 15 10:39:18 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec 15 10:39:18 compute-0 ceph-mon[74356]: 8.17 scrub starts
Dec 15 10:39:18 compute-0 ceph-mon[74356]: 8.17 scrub ok
Dec 15 10:39:18 compute-0 ceph-mon[74356]: 10.1e scrub starts
Dec 15 10:39:18 compute-0 ceph-mon[74356]: 10.1e scrub ok
Dec 15 10:39:18 compute-0 ceph-mon[74356]: pgmap v62: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:18 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 15 10:39:18 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 15 10:39:18 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 15 10:39:18 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 15 10:39:18 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 15 10:39:18 compute-0 ceph-mon[74356]: 10.19 scrub starts
Dec 15 10:39:18 compute-0 ceph-mon[74356]: 10.19 scrub ok
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[12.19( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[12.1c( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[12.8( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[12.a( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[12.e( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[12.c( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[12.b( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[12.6( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[12.12( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[12.10( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.17( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.700659752s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 active pruub 166.448486328s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.17( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.700630188s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.448486328s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.14( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394456863s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 active pruub 172.142562866s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.14( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394412994s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.142562866s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.15( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.400116920s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 active pruub 172.148620605s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.15( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.389884949s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 active pruub 172.138381958s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.15( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.400103569s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.148620605s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.15( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.389832497s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.138381958s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.16( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.705865860s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 active pruub 166.454544067s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.16( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.705843925s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.454544067s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.16( v 54'45 (0'0,54'45] local-lis/les=57/58 n=2 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.393853188s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 active pruub 172.142608643s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.16( v 54'45 (0'0,54'45] local-lis/les=57/58 n=2 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.393836975s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.142608643s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.17( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.393719673s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 active pruub 172.142547607s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.17( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.393686295s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.142547607s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.14( v 62'51 (0'0,62'51] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.705628395s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=60'49 lcod 60'50 mlcod 60'50 active pruub 166.454574585s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.14( v 62'51 (0'0,62'51] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.705584526s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=60'49 lcod 60'50 mlcod 0'0 unknown NOTIFY pruub 166.454574585s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.16( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.393647194s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 active pruub 172.142745972s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.17( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.393143654s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 active pruub 172.142593384s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.17( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.393121719s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.142593384s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.10( v 59'48 (0'0,59'48] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.393620491s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=58'46 lcod 58'47 mlcod 58'47 active pruub 172.143203735s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.11( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.393085480s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 active pruub 172.142654419s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.13( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.704974174s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 active pruub 166.454559326s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.10( v 59'48 (0'0,59'48] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.393589020s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=58'46 lcod 58'47 mlcod 0'0 unknown NOTIFY pruub 172.143203735s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.11( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.393058777s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.142654419s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.13( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.704950333s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.454559326s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.12( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.704890251s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 active pruub 166.454589844s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.12( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.704877853s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.454589844s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.10( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392987251s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 active pruub 172.142761230s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.16( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.393623352s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.142745972s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.10( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392967224s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.142761230s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.11( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392936707s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 active pruub 172.142761230s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.1( v 50'48 (0'0,50'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.705032349s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 active pruub 166.454864502s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.1( v 50'48 (0'0,50'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.705019951s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.454864502s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.11( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392902374s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.142761230s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.3( v 46'6 (0'0,46'6] local-lis/les=57/58 n=1 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.393102646s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 active pruub 172.143249512s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.3( v 46'6 (0'0,46'6] local-lis/les=57/58 n=1 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.393081665s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.143249512s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.3( v 54'45 (0'0,54'45] local-lis/les=57/58 n=1 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.393096924s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 active pruub 172.143280029s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.3( v 54'45 (0'0,54'45] local-lis/les=57/58 n=1 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.393074989s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.143280029s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.f( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.393026352s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 active pruub 172.143280029s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.f( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.393000603s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.143280029s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.9( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.393009186s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 active pruub 172.143402100s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.9( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392990112s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.143402100s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.8( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392874718s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 active pruub 172.143417358s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.8( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392854691s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.143417358s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.a( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.704281807s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 active pruub 166.454849243s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.a( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.704265594s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.454849243s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.8( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392975807s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 active pruub 172.143600464s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.9( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392840385s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 active pruub 172.143554688s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.8( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392929077s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.143600464s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.9( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392818451s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.143554688s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.b( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392702103s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 active pruub 172.143493652s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.b( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392680168s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.143493652s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.2( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.391842842s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 active pruub 172.142776489s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.a( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392668724s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 active pruub 172.143692017s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.f( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392615318s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 active pruub 172.143630981s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.2( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.391819954s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.142776489s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.a( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392647743s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.143692017s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.e( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392374039s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 active pruub 172.143432617s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.f( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392563820s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.143630981s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.e( v 62'51 (0'0,62'51] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.703950882s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=60'49 lcod 60'50 mlcod 60'50 active pruub 166.455108643s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.d( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392513275s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 active pruub 172.143737793s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.e( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392338753s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.143432617s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.d( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392495155s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.143737793s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.f( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.703687668s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 active pruub 166.454986572s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.e( v 62'51 (0'0,62'51] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.703880310s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=60'49 lcod 60'50 mlcod 0'0 unknown NOTIFY pruub 166.455108643s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.f( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.703667641s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.454986572s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.d( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392305374s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 active pruub 172.143707275s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.d( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392283440s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.143707275s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.c( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392323494s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 active pruub 172.143890381s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.8( v 50'48 (0'0,50'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.703432083s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 active pruub 166.455047607s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.a( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392165184s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 active pruub 172.143783569s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.a( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392145157s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.143783569s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.b( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392172813s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 active pruub 172.143814087s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.c( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392270088s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.143890381s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.b( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.392133713s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.143814087s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.8( v 50'48 (0'0,50'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.703414917s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.455047607s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.3( v 62'51 (0'0,62'51] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.703234673s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=60'49 lcod 60'50 mlcod 60'50 active pruub 166.455062866s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.3( v 62'51 (0'0,62'51] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.703189850s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=60'49 lcod 60'50 mlcod 0'0 unknown NOTIFY pruub 166.455062866s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.4( v 50'48 (0'0,50'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.703146935s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 active pruub 166.455078125s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.5( v 50'48 (0'0,50'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.703123093s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 active pruub 166.455093384s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.6( v 46'6 (0'0,46'6] local-lis/les=57/58 n=1 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.391981125s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 active pruub 172.143905640s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.6( v 46'6 (0'0,46'6] local-lis/les=57/58 n=1 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.391895294s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.143905640s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.5( v 50'48 (0'0,50'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.703104973s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.455093384s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.4( v 50'48 (0'0,50'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.703109741s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.455078125s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.6( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.391894341s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 active pruub 172.143951416s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.6( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.391879082s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.143951416s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.5( v 54'45 (0'0,54'45] local-lis/les=57/58 n=1 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.395148277s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 active pruub 172.147399902s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.7( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.391761780s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 active pruub 172.143997192s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.5( v 46'6 (0'0,46'6] local-lis/les=57/58 n=1 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394969940s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 active pruub 172.147232056s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.4( v 54'45 (0'0,54'45] local-lis/les=57/58 n=1 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.395008087s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 active pruub 172.147277832s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.5( v 46'6 (0'0,46'6] local-lis/les=57/58 n=1 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394952774s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.147232056s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.5( v 54'45 (0'0,54'45] local-lis/les=57/58 n=1 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.395100594s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.147399902s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.4( v 54'45 (0'0,54'45] local-lis/les=57/58 n=1 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394989014s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.147277832s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.7( v 50'48 (0'0,50'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.702816963s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 active pruub 166.455139160s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.7( v 50'48 (0'0,50'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.702704430s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.455139160s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.1b( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394951820s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 active pruub 172.147460938s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.7( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.391745567s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.143997192s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.1b( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394882202s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.147460938s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.19( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.702536583s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 active pruub 166.455169678s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.18( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394749641s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 active pruub 172.147460938s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.18( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394735336s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.147460938s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.19( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.702459335s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.455169678s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.1a( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.702451706s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 active pruub 166.455184937s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.1a( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.702426910s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.455184937s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.19( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394770622s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 active pruub 172.147552490s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.1b( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.702307701s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 active pruub 166.455169678s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.1b( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.702289581s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.455169678s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.19( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394722939s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.147552490s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.18( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394644737s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 active pruub 172.147567749s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.1c( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.702302933s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 active pruub 166.455276489s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.18( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394620895s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.147567749s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.1c( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.702283859s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.455276489s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.1f( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394746780s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 active pruub 172.147903442s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.1d( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.701997757s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 active pruub 166.455184937s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.1d( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.701979637s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.455184937s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.1e( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.701946259s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 active pruub 166.455200195s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.1f( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394731522s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.147903442s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[11.1e( v 50'48 (0'0,50'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.701912880s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=50'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.455200195s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.1c( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394888878s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 active pruub 172.148208618s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.1c( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394870758s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.148208618s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.1d( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394892693s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 active pruub 172.148269653s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.1d( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394869804s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.148269653s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.12( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394926071s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 active pruub 172.148422241s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.12( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394904137s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.148422241s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.12( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394712448s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 active pruub 172.148422241s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.13( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394756317s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 active pruub 172.148483276s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[9.13( v 46'6 (0'0,46'6] local-lis/les=57/58 n=0 ec=57/45 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394735336s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.148483276s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 63 pg[8.12( v 54'45 (0'0,54'45] local-lis/les=57/58 n=0 ec=57/41 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=14.394681931s) [1] r=-1 lpr=63 pi=[57,63)/1 crt=54'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.148422241s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:18 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:18 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c40016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:18 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Dec 15 10:39:18 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Dec 15 10:39:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd[95528]: Mon Dec 15 10:39:18 2025: (VI_0) Entering MASTER STATE
Dec 15 10:39:19 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v64: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:19 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Dec 15 10:39:19 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 15 10:39:19 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec 15 10:39:19 compute-0 ceph-mon[74356]: 8.11 scrub starts
Dec 15 10:39:19 compute-0 ceph-mon[74356]: 8.11 scrub ok
Dec 15 10:39:19 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 15 10:39:19 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 15 10:39:19 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 15 10:39:19 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 15 10:39:19 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 15 10:39:19 compute-0 ceph-mon[74356]: osdmap e63: 3 total, 3 up, 3 in
Dec 15 10:39:19 compute-0 ceph-mon[74356]: 10.16 scrub starts
Dec 15 10:39:19 compute-0 ceph-mon[74356]: 10.16 scrub ok
Dec 15 10:39:19 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 15 10:39:19 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 15 10:39:19 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec 15 10:39:19 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec 15 10:39:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 64 pg[12.10( empty local-lis/les=63/64 n=0 ec=61/51 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 64 pg[12.12( empty local-lis/les=63/64 n=0 ec=61/51 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 64 pg[12.b( empty local-lis/les=63/64 n=0 ec=61/51 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 64 pg[12.6( empty local-lis/les=63/64 n=0 ec=61/51 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 64 pg[12.c( empty local-lis/les=63/64 n=0 ec=61/51 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 64 pg[12.e( empty local-lis/les=63/64 n=0 ec=61/51 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 64 pg[12.a( empty local-lis/les=63/64 n=0 ec=61/51 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 64 pg[12.1c( empty local-lis/les=63/64 n=0 ec=61/51 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 64 pg[12.19( empty local-lis/les=63/64 n=0 ec=61/51 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 64 pg[10.16( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [0] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 64 pg[10.2( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [0] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 64 pg[12.8( empty local-lis/les=63/64 n=0 ec=61/51 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 64 pg[10.e( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [0] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 64 pg[10.a( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [0] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 64 pg[10.6( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [0] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 64 pg[10.1a( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [0] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 64 pg[10.1e( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [0] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:19 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 64 pg[10.12( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [0] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:19 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:19 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e4002290 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:19 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.2 deep-scrub starts
Dec 15 10:39:19 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.2 deep-scrub ok
Dec 15 10:39:20 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec 15 10:39:20 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec 15 10:39:20 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec 15 10:39:20 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 65 pg[10.16( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:20 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 65 pg[10.16( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:20 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 65 pg[10.12( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:20 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 65 pg[10.12( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:20 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 65 pg[10.a( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:20 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 65 pg[10.a( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:20 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 65 pg[10.e( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:20 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 65 pg[10.e( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:20 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 65 pg[10.2( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:20 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 65 pg[10.2( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:20 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 65 pg[10.6( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:20 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 65 pg[10.6( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:20 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 65 pg[10.1a( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:20 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 65 pg[10.1a( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:20 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 65 pg[10.1e( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:20 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 65 pg[10.1e( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:20 compute-0 ceph-mon[74356]: 11.15 scrub starts
Dec 15 10:39:20 compute-0 ceph-mon[74356]: 11.15 scrub ok
Dec 15 10:39:20 compute-0 ceph-mon[74356]: pgmap v64: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:20 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 15 10:39:20 compute-0 ceph-mon[74356]: osdmap e64: 3 total, 3 up, 3 in
Dec 15 10:39:20 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:20 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e4002290 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:20 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:20 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:20 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Dec 15 10:39:20 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Dec 15 10:39:20 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event 6f492ce7-664b-4350-9be3-902dd3d7ba58 (Global Recovery Event) in 10 seconds
Dec 15 10:39:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:39:21 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:39:21 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 15 10:39:21 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v67: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Dec 15 10:39:21 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec 15 10:39:21 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:21 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev b9d61420-ae7a-406d-b6e2-457dede4f51c (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Dec 15 10:39:21 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event b9d61420-ae7a-406d-b6e2-457dede4f51c (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 31 seconds
Dec 15 10:39:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 15 10:39:21 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:21 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev 3f5062f2-e5c9-4a00-8e21-b9c7d588f625 (Updating alertmanager deployment (+1 -> 1))
Dec 15 10:39:21 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Dec 15 10:39:21 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Dec 15 10:39:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec 15 10:39:21 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 15 10:39:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec 15 10:39:21 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec 15 10:39:21 compute-0 ceph-mon[74356]: 9.2 deep-scrub starts
Dec 15 10:39:21 compute-0 ceph-mon[74356]: 9.2 deep-scrub ok
Dec 15 10:39:21 compute-0 ceph-mon[74356]: osdmap e65: 3 total, 3 up, 3 in
Dec 15 10:39:21 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:21 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:21 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec 15 10:39:21 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:21 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:21 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 15 10:39:21 compute-0 ceph-mon[74356]: osdmap e66: 3 total, 3 up, 3 in
Dec 15 10:39:21 compute-0 sudo[95566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:39:21 compute-0 sudo[95566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:39:21 compute-0 sudo[95566]: pam_unix(sudo:session): session closed for user root
Dec 15 10:39:21 compute-0 sudo[95591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:39:21 compute-0 sudo[95591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:39:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:21 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e4002290 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:39:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec 15 10:39:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec 15 10:39:21 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec 15 10:39:21 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 67 pg[10.1e( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=5 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=54'1067 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:21 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 67 pg[10.1e( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=5 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=54'1067 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:21 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 67 pg[10.6( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=6 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=54'1067 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:21 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 67 pg[10.6( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=6 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=54'1067 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:21 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.c scrub starts
Dec 15 10:39:21 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.c scrub ok
Dec 15 10:39:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:22 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c40016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:22 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec 15 10:39:22 compute-0 ceph-mon[74356]: 11.0 scrub starts
Dec 15 10:39:22 compute-0 ceph-mon[74356]: 11.0 scrub ok
Dec 15 10:39:22 compute-0 ceph-mon[74356]: pgmap v67: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:22 compute-0 ceph-mon[74356]: Deploying daemon alertmanager.compute-0 on compute-0
Dec 15 10:39:22 compute-0 ceph-mon[74356]: osdmap e67: 3 total, 3 up, 3 in
Dec 15 10:39:22 compute-0 ceph-mon[74356]: 10.14 scrub starts
Dec 15 10:39:22 compute-0 ceph-mon[74356]: 10.1d scrub starts
Dec 15 10:39:22 compute-0 ceph-mon[74356]: 10.1d scrub ok
Dec 15 10:39:22 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec 15 10:39:22 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec 15 10:39:22 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 68 pg[10.16( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=4 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=68) [0] r=0 lpr=68 pi=[59,68)/1 luod=0'0 crt=54'1067 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:22 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 68 pg[10.16( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=4 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=68) [0] r=0 lpr=68 pi=[59,68)/1 crt=54'1067 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:22 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 68 pg[10.a( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=6 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=68) [0] r=0 lpr=68 pi=[59,68)/1 luod=0'0 crt=54'1067 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:22 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 68 pg[10.a( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=6 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=68) [0] r=0 lpr=68 pi=[59,68)/1 crt=54'1067 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:22 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 68 pg[10.2( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=6 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=68) [0] r=0 lpr=68 pi=[59,68)/1 luod=0'0 crt=54'1067 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:22 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 68 pg[10.2( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=6 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=68) [0] r=0 lpr=68 pi=[59,68)/1 crt=54'1067 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:22 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 68 pg[10.e( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=6 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=68) [0] r=0 lpr=68 pi=[59,68)/1 luod=0'0 crt=54'1067 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:22 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 68 pg[10.e( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=6 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=68) [0] r=0 lpr=68 pi=[59,68)/1 crt=54'1067 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:22 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 68 pg[10.12( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=4 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=68) [0] r=0 lpr=68 pi=[59,68)/1 luod=0'0 crt=54'1067 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:22 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 68 pg[10.12( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=4 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=68) [0] r=0 lpr=68 pi=[59,68)/1 crt=54'1067 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:22 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 68 pg[10.1a( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=5 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=68) [0] r=0 lpr=68 pi=[59,68)/1 luod=0'0 crt=54'1067 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:22 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 68 pg[10.1a( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=5 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=68) [0] r=0 lpr=68 pi=[59,68)/1 crt=54'1067 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:22 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 68 pg[10.6( v 54'1067 (0'0,54'1067] local-lis/les=67/68 n=6 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=54'1067 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:22 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 68 pg[10.1e( v 54'1067 (0'0,54'1067] local-lis/les=67/68 n=5 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=54'1067 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:22 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc0039c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:22 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.b scrub starts
Dec 15 10:39:22 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.b scrub ok
Dec 15 10:39:23 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v71: 353 pgs: 8 remapped+peering, 345 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 799 B/s, 24 objects/s recovering
Dec 15 10:39:23 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:23 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:23 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec 15 10:39:23 compute-0 ceph-mon[74356]: 11.c scrub starts
Dec 15 10:39:23 compute-0 ceph-mon[74356]: 11.c scrub ok
Dec 15 10:39:23 compute-0 ceph-mon[74356]: osdmap e68: 3 total, 3 up, 3 in
Dec 15 10:39:23 compute-0 ceph-mon[74356]: 10.14 scrub ok
Dec 15 10:39:23 compute-0 ceph-mon[74356]: 10.0 deep-scrub starts
Dec 15 10:39:23 compute-0 ceph-mon[74356]: 10.0 deep-scrub ok
Dec 15 10:39:23 compute-0 ceph-mon[74356]: 10.15 deep-scrub starts
Dec 15 10:39:23 compute-0 ceph-mon[74356]: 10.15 deep-scrub ok
Dec 15 10:39:23 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec 15 10:39:23 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec 15 10:39:23 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 69 pg[10.16( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=4 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=68) [0] r=0 lpr=68 pi=[59,68)/1 crt=54'1067 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:23 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 69 pg[10.e( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=6 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=68) [0] r=0 lpr=68 pi=[59,68)/1 crt=54'1067 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:23 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 69 pg[10.a( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=6 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=68) [0] r=0 lpr=68 pi=[59,68)/1 crt=54'1067 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:23 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 69 pg[10.2( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=6 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=68) [0] r=0 lpr=68 pi=[59,68)/1 crt=54'1067 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:23 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 69 pg[10.12( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=4 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=68) [0] r=0 lpr=68 pi=[59,68)/1 crt=54'1067 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:23 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 69 pg[10.1a( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=5 ec=59/47 lis/c=65/59 les/c/f=66/60/0 sis=68) [0] r=0 lpr=68 pi=[59,68)/1 crt=54'1067 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:23 compute-0 podman[95657]: 2025-12-15 10:39:23.404936647 +0000 UTC m=+1.825467704 volume create 3f91c3cb8da7e842ea82be2a9df92f807a5e44e9505fe967f8f34fd41e6d60b8
Dec 15 10:39:23 compute-0 podman[95657]: 2025-12-15 10:39:23.390169487 +0000 UTC m=+1.810700574 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 15 10:39:23 compute-0 podman[95657]: 2025-12-15 10:39:23.415395723 +0000 UTC m=+1.835926780 container create 766e89b263bb7d5ac870a33cbbfe931f2b27a96e900f0f1f7558d05388b4e070 (image=quay.io/prometheus/alertmanager:v0.25.0, name=recursing_beaver, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:23 compute-0 systemd[1]: Started libpod-conmon-766e89b263bb7d5ac870a33cbbfe931f2b27a96e900f0f1f7558d05388b4e070.scope.
Dec 15 10:39:23 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:39:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fdb669f7362c38f62c3b914911bbf1d1474adaf6c1f05573ce67b74cb4568ff/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:23 compute-0 podman[95657]: 2025-12-15 10:39:23.48913142 +0000 UTC m=+1.909662497 container init 766e89b263bb7d5ac870a33cbbfe931f2b27a96e900f0f1f7558d05388b4e070 (image=quay.io/prometheus/alertmanager:v0.25.0, name=recursing_beaver, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:23 compute-0 podman[95657]: 2025-12-15 10:39:23.495501558 +0000 UTC m=+1.916032625 container start 766e89b263bb7d5ac870a33cbbfe931f2b27a96e900f0f1f7558d05388b4e070 (image=quay.io/prometheus/alertmanager:v0.25.0, name=recursing_beaver, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:23 compute-0 recursing_beaver[95792]: 65534 65534
Dec 15 10:39:23 compute-0 systemd[1]: libpod-766e89b263bb7d5ac870a33cbbfe931f2b27a96e900f0f1f7558d05388b4e070.scope: Deactivated successfully.
Dec 15 10:39:23 compute-0 podman[95657]: 2025-12-15 10:39:23.499379169 +0000 UTC m=+1.919910226 container attach 766e89b263bb7d5ac870a33cbbfe931f2b27a96e900f0f1f7558d05388b4e070 (image=quay.io/prometheus/alertmanager:v0.25.0, name=recursing_beaver, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:23 compute-0 podman[95657]: 2025-12-15 10:39:23.499744031 +0000 UTC m=+1.920275088 container died 766e89b263bb7d5ac870a33cbbfe931f2b27a96e900f0f1f7558d05388b4e070 (image=quay.io/prometheus/alertmanager:v0.25.0, name=recursing_beaver, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fdb669f7362c38f62c3b914911bbf1d1474adaf6c1f05573ce67b74cb4568ff-merged.mount: Deactivated successfully.
Dec 15 10:39:23 compute-0 podman[95657]: 2025-12-15 10:39:23.543743651 +0000 UTC m=+1.964274708 container remove 766e89b263bb7d5ac870a33cbbfe931f2b27a96e900f0f1f7558d05388b4e070 (image=quay.io/prometheus/alertmanager:v0.25.0, name=recursing_beaver, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:23 compute-0 podman[95657]: 2025-12-15 10:39:23.548745237 +0000 UTC m=+1.969276314 volume remove 3f91c3cb8da7e842ea82be2a9df92f807a5e44e9505fe967f8f34fd41e6d60b8
Dec 15 10:39:23 compute-0 systemd[1]: libpod-conmon-766e89b263bb7d5ac870a33cbbfe931f2b27a96e900f0f1f7558d05388b4e070.scope: Deactivated successfully.
Dec 15 10:39:23 compute-0 podman[95808]: 2025-12-15 10:39:23.605911418 +0000 UTC m=+0.037477099 volume create 07d6430b13ed8219ca9416a7c80b37e9c77a882feb761b1738e01d81dffdb21c
Dec 15 10:39:23 compute-0 podman[95808]: 2025-12-15 10:39:23.613239016 +0000 UTC m=+0.044804697 container create 2c3328120577d601dcb147f2d8d002085d7c004293582b8cbf480cdfcfd5aae5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=jolly_wu, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:23 compute-0 systemd[1]: Started libpod-conmon-2c3328120577d601dcb147f2d8d002085d7c004293582b8cbf480cdfcfd5aae5.scope.
Dec 15 10:39:23 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:39:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c729999d7c434c0875d56511d4448b365d4b243f6d1cc4e3db733ec0def31bde/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:23 compute-0 podman[95808]: 2025-12-15 10:39:23.590352414 +0000 UTC m=+0.021918125 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 15 10:39:23 compute-0 podman[95808]: 2025-12-15 10:39:23.68686031 +0000 UTC m=+0.118425991 container init 2c3328120577d601dcb147f2d8d002085d7c004293582b8cbf480cdfcfd5aae5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=jolly_wu, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:23 compute-0 podman[95808]: 2025-12-15 10:39:23.69326068 +0000 UTC m=+0.124826361 container start 2c3328120577d601dcb147f2d8d002085d7c004293582b8cbf480cdfcfd5aae5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=jolly_wu, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:23 compute-0 jolly_wu[95824]: 65534 65534
Dec 15 10:39:23 compute-0 podman[95808]: 2025-12-15 10:39:23.697783321 +0000 UTC m=+0.129349002 container attach 2c3328120577d601dcb147f2d8d002085d7c004293582b8cbf480cdfcfd5aae5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=jolly_wu, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:23 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Dec 15 10:39:23 compute-0 systemd[1]: libpod-2c3328120577d601dcb147f2d8d002085d7c004293582b8cbf480cdfcfd5aae5.scope: Deactivated successfully.
Dec 15 10:39:23 compute-0 podman[95808]: 2025-12-15 10:39:23.719041114 +0000 UTC m=+0.150606805 container died 2c3328120577d601dcb147f2d8d002085d7c004293582b8cbf480cdfcfd5aae5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=jolly_wu, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:23 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Dec 15 10:39:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c729999d7c434c0875d56511d4448b365d4b243f6d1cc4e3db733ec0def31bde-merged.mount: Deactivated successfully.
Dec 15 10:39:23 compute-0 podman[95808]: 2025-12-15 10:39:23.755904231 +0000 UTC m=+0.187469912 container remove 2c3328120577d601dcb147f2d8d002085d7c004293582b8cbf480cdfcfd5aae5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=jolly_wu, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:23 compute-0 podman[95808]: 2025-12-15 10:39:23.758503383 +0000 UTC m=+0.190069064 volume remove 07d6430b13ed8219ca9416a7c80b37e9c77a882feb761b1738e01d81dffdb21c
Dec 15 10:39:23 compute-0 systemd[1]: libpod-conmon-2c3328120577d601dcb147f2d8d002085d7c004293582b8cbf480cdfcfd5aae5.scope: Deactivated successfully.
Dec 15 10:39:23 compute-0 systemd[1]: Reloading.
Dec 15 10:39:23 compute-0 systemd-rc-local-generator[95868]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:39:23 compute-0 systemd-sysv-generator[95871]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:39:24 compute-0 systemd[1]: Reloading.
Dec 15 10:39:24 compute-0 systemd-rc-local-generator[95908]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:39:24 compute-0 systemd-sysv-generator[95911]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:39:24 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:24 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e4002290 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:24 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 77365f67-614e-5a8d-b658-640395550c79...
Dec 15 10:39:24 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:24 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c40016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:24 compute-0 ceph-mon[74356]: 11.b scrub starts
Dec 15 10:39:24 compute-0 ceph-mon[74356]: 11.b scrub ok
Dec 15 10:39:24 compute-0 ceph-mon[74356]: pgmap v71: 353 pgs: 8 remapped+peering, 345 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 799 B/s, 24 objects/s recovering
Dec 15 10:39:24 compute-0 ceph-mon[74356]: osdmap e69: 3 total, 3 up, 3 in
Dec 15 10:39:24 compute-0 ceph-mon[74356]: 10.c scrub starts
Dec 15 10:39:24 compute-0 ceph-mon[74356]: 10.c scrub ok
Dec 15 10:39:24 compute-0 ceph-mon[74356]: 10.13 scrub starts
Dec 15 10:39:24 compute-0 ceph-mon[74356]: 10.13 scrub ok
Dec 15 10:39:24 compute-0 podman[95969]: 2025-12-15 10:39:24.514072993 +0000 UTC m=+0.033170295 volume create e2dba4d19103cd02742ca3e5c71c8a51018ce58965edfd5db577cbc2a9d2132e
Dec 15 10:39:24 compute-0 podman[95969]: 2025-12-15 10:39:24.522037101 +0000 UTC m=+0.041134393 container create 73244696e42fd889cd12e816ba79afefa66014c2436355b7b29fda12a488ec50 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2488e520edebdf17f614aad49dc9b8302542ba4f6e68f6bc26fc0bd6279d00/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2488e520edebdf17f614aad49dc9b8302542ba4f6e68f6bc26fc0bd6279d00/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:24 compute-0 podman[95969]: 2025-12-15 10:39:24.57013197 +0000 UTC m=+0.089229292 container init 73244696e42fd889cd12e816ba79afefa66014c2436355b7b29fda12a488ec50 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:24 compute-0 podman[95969]: 2025-12-15 10:39:24.574826826 +0000 UTC m=+0.093924128 container start 73244696e42fd889cd12e816ba79afefa66014c2436355b7b29fda12a488ec50 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:24 compute-0 bash[95969]: 73244696e42fd889cd12e816ba79afefa66014c2436355b7b29fda12a488ec50
Dec 15 10:39:24 compute-0 podman[95969]: 2025-12-15 10:39:24.501995407 +0000 UTC m=+0.021092729 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 15 10:39:24 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:39:24 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0[95984]: ts=2025-12-15T10:39:24.598Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Dec 15 10:39:24 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0[95984]: ts=2025-12-15T10:39:24.598Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Dec 15 10:39:24 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0[95984]: ts=2025-12-15T10:39:24.605Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Dec 15 10:39:24 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0[95984]: ts=2025-12-15T10:39:24.607Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Dec 15 10:39:24 compute-0 sudo[95591]: pam_unix(sudo:session): session closed for user root
Dec 15 10:39:24 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:39:24 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:24 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0[95984]: ts=2025-12-15T10:39:24.642Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Dec 15 10:39:24 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:39:24 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0[95984]: ts=2025-12-15T10:39:24.642Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Dec 15 10:39:24 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd[95528]: Mon Dec 15 10:39:24 2025: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Dec 15 10:39:24 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0[95984]: ts=2025-12-15T10:39:24.647Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Dec 15 10:39:24 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0[95984]: ts=2025-12-15T10:39:24.647Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Dec 15 10:39:24 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:24 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec 15 10:39:24 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:24 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev 3f5062f2-e5c9-4a00-8e21-b9c7d588f625 (Updating alertmanager deployment (+1 -> 1))
Dec 15 10:39:24 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event 3f5062f2-e5c9-4a00-8e21-b9c7d588f625 (Updating alertmanager deployment (+1 -> 1)) in 4 seconds
Dec 15 10:39:24 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec 15 10:39:24 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:24 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev 4f834d59-8431-4f9b-b3d8-1e57c8dde683 (Updating grafana deployment (+1 -> 1))
Dec 15 10:39:24 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Dec 15 10:39:24 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Dec 15 10:39:24 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.d scrub starts
Dec 15 10:39:24 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.d scrub ok
Dec 15 10:39:24 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Dec 15 10:39:24 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:24 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Dec 15 10:39:24 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:24 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Dec 15 10:39:24 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 15 10:39:24 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 15 10:39:24 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Dec 15 10:39:24 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:24 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Dec 15 10:39:24 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Dec 15 10:39:24 compute-0 sudo[96005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:39:24 compute-0 sudo[96005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:39:24 compute-0 sudo[96005]: pam_unix(sudo:session): session closed for user root
Dec 15 10:39:24 compute-0 sudo[96030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:39:24 compute-0 sudo[96030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:39:25 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v73: 353 pgs: 8 remapped+peering, 345 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 788 B/s, 23 objects/s recovering
Dec 15 10:39:25 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:25 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc0039c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:25 compute-0 sudo[96119]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raqhkuraybngphwtnolxswtfmxmnhaib ; /usr/bin/python3'
Dec 15 10:39:25 compute-0 sudo[96119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:39:25 compute-0 ceph-mon[74356]: 11.9 scrub starts
Dec 15 10:39:25 compute-0 ceph-mon[74356]: 11.9 scrub ok
Dec 15 10:39:25 compute-0 ceph-mon[74356]: 10.8 scrub starts
Dec 15 10:39:25 compute-0 ceph-mon[74356]: 10.8 scrub ok
Dec 15 10:39:25 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:25 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:25 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:25 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:25 compute-0 ceph-mon[74356]: Regenerating cephadm self-signed grafana TLS certificates
Dec 15 10:39:25 compute-0 ceph-mon[74356]: 11.d scrub starts
Dec 15 10:39:25 compute-0 ceph-mon[74356]: 10.1b scrub starts
Dec 15 10:39:25 compute-0 ceph-mon[74356]: 11.d scrub ok
Dec 15 10:39:25 compute-0 ceph-mon[74356]: 10.1b scrub ok
Dec 15 10:39:25 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:25 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:25 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 15 10:39:25 compute-0 ceph-mon[74356]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 15 10:39:25 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:25 compute-0 ceph-mon[74356]: Deploying daemon grafana.compute-0 on compute-0
Dec 15 10:39:25 compute-0 python3[96128]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:39:25 compute-0 podman[96137]: 2025-12-15 10:39:25.706869445 +0000 UTC m=+0.111752363 container create 2c62fe1fc4b0b0e3b6b3ad91d9e42fb80d53211dce327f9e3516f56ac2c98b61 (image=quay.io/ceph/ceph:v19, name=wizardly_cartwright, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 15 10:39:25 compute-0 podman[96137]: 2025-12-15 10:39:25.618232523 +0000 UTC m=+0.023115471 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:39:25 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.e scrub starts
Dec 15 10:39:25 compute-0 ceph-mgr[74651]: [progress INFO root] Writing back 24 completed events
Dec 15 10:39:25 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 15 10:39:25 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.e scrub ok
Dec 15 10:39:25 compute-0 systemd[1]: Started libpod-conmon-2c62fe1fc4b0b0e3b6b3ad91d9e42fb80d53211dce327f9e3516f56ac2c98b61.scope.
Dec 15 10:39:25 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee70fa102e12b6fbddf8333e30ae50bdcc79b1340e4776f7bbaa667a4376bcc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee70fa102e12b6fbddf8333e30ae50bdcc79b1340e4776f7bbaa667a4376bcc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:26 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:26 compute-0 podman[96137]: 2025-12-15 10:39:26.132468845 +0000 UTC m=+0.537351793 container init 2c62fe1fc4b0b0e3b6b3ad91d9e42fb80d53211dce327f9e3516f56ac2c98b61 (image=quay.io/ceph/ceph:v19, name=wizardly_cartwright, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 15 10:39:26 compute-0 ceph-mgr[74651]: [progress WARNING root] Starting Global Recovery Event,8 pgs not in active + clean state
Dec 15 10:39:26 compute-0 podman[96137]: 2025-12-15 10:39:26.14451617 +0000 UTC m=+0.549399108 container start 2c62fe1fc4b0b0e3b6b3ad91d9e42fb80d53211dce327f9e3516f56ac2c98b61 (image=quay.io/ceph/ceph:v19, name=wizardly_cartwright, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:39:26 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:26 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:26 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:39:26 compute-0 wizardly_cartwright[96153]: ERROR: invalid flag --daemon-type
Dec 15 10:39:26 compute-0 systemd[1]: libpod-2c62fe1fc4b0b0e3b6b3ad91d9e42fb80d53211dce327f9e3516f56ac2c98b61.scope: Deactivated successfully.
Dec 15 10:39:26 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:26 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e4002290 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:26 compute-0 podman[96137]: 2025-12-15 10:39:26.586252508 +0000 UTC m=+0.991135436 container attach 2c62fe1fc4b0b0e3b6b3ad91d9e42fb80d53211dce327f9e3516f56ac2c98b61 (image=quay.io/ceph/ceph:v19, name=wizardly_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 15 10:39:26 compute-0 podman[96137]: 2025-12-15 10:39:26.586879166 +0000 UTC m=+0.991762094 container died 2c62fe1fc4b0b0e3b6b3ad91d9e42fb80d53211dce327f9e3516f56ac2c98b61 (image=quay.io/ceph/ceph:v19, name=wizardly_cartwright, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 15 10:39:26 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0[95984]: ts=2025-12-15T10:39:26.608Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000658754s
Dec 15 10:39:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-dee70fa102e12b6fbddf8333e30ae50bdcc79b1340e4776f7bbaa667a4376bcc-merged.mount: Deactivated successfully.
Dec 15 10:39:26 compute-0 podman[96137]: 2025-12-15 10:39:26.632124538 +0000 UTC m=+1.037007456 container remove 2c62fe1fc4b0b0e3b6b3ad91d9e42fb80d53211dce327f9e3516f56ac2c98b61 (image=quay.io/ceph/ceph:v19, name=wizardly_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:39:26 compute-0 systemd[1]: libpod-conmon-2c62fe1fc4b0b0e3b6b3ad91d9e42fb80d53211dce327f9e3516f56ac2c98b61.scope: Deactivated successfully.
Dec 15 10:39:26 compute-0 sudo[96119]: pam_unix(sudo:session): session closed for user root
Dec 15 10:39:26 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.c scrub starts
Dec 15 10:39:26 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.c scrub ok
Dec 15 10:39:27 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v74: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 55 B/s, 7 objects/s recovering
Dec 15 10:39:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Dec 15 10:39:27 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 15 10:39:27 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:27 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:27 compute-0 ceph-mon[74356]: pgmap v73: 353 pgs: 8 remapped+peering, 345 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 788 B/s, 23 objects/s recovering
Dec 15 10:39:27 compute-0 ceph-mon[74356]: 12.15 scrub starts
Dec 15 10:39:27 compute-0 ceph-mon[74356]: 12.15 scrub ok
Dec 15 10:39:27 compute-0 ceph-mon[74356]: 12.13 scrub starts
Dec 15 10:39:27 compute-0 ceph-mon[74356]: 12.13 scrub ok
Dec 15 10:39:27 compute-0 ceph-mon[74356]: 8.e scrub starts
Dec 15 10:39:27 compute-0 ceph-mon[74356]: 8.e scrub ok
Dec 15 10:39:27 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec 15 10:39:27 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 15 10:39:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec 15 10:39:27 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec 15 10:39:27 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.2 deep-scrub starts
Dec 15 10:39:27 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.2 deep-scrub ok
Dec 15 10:39:28 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:28 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc0039c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:28 compute-0 ceph-mon[74356]: 12.f scrub starts
Dec 15 10:39:28 compute-0 ceph-mon[74356]: 12.f scrub ok
Dec 15 10:39:28 compute-0 ceph-mon[74356]: 9.3 deep-scrub starts
Dec 15 10:39:28 compute-0 ceph-mon[74356]: 9.3 deep-scrub ok
Dec 15 10:39:28 compute-0 ceph-mon[74356]: 9.c scrub starts
Dec 15 10:39:28 compute-0 ceph-mon[74356]: 9.c scrub ok
Dec 15 10:39:28 compute-0 ceph-mon[74356]: pgmap v74: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 55 B/s, 7 objects/s recovering
Dec 15 10:39:28 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 15 10:39:28 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 15 10:39:28 compute-0 ceph-mon[74356]: osdmap e70: 3 total, 3 up, 3 in
Dec 15 10:39:28 compute-0 ceph-mon[74356]: 12.d deep-scrub starts
Dec 15 10:39:28 compute-0 ceph-mon[74356]: 12.d deep-scrub ok
Dec 15 10:39:28 compute-0 ceph-mon[74356]: 8.6 scrub starts
Dec 15 10:39:28 compute-0 ceph-mon[74356]: 8.6 scrub ok
Dec 15 10:39:28 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:28 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:28 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Dec 15 10:39:28 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Dec 15 10:39:29 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v76: 353 pgs: 353 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 4 objects/s recovering
Dec 15 10:39:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Dec 15 10:39:29 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec 15 10:39:29 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:29 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec 15 10:39:29 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.0 deep-scrub starts
Dec 15 10:39:29 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.0 deep-scrub ok
Dec 15 10:39:30 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:30 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:30 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:30 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0000df0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:30 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 15 10:39:30 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec 15 10:39:30 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec 15 10:39:30 compute-0 ceph-mon[74356]: 11.2 deep-scrub starts
Dec 15 10:39:30 compute-0 ceph-mon[74356]: 11.2 deep-scrub ok
Dec 15 10:39:30 compute-0 ceph-mon[74356]: 12.5 scrub starts
Dec 15 10:39:30 compute-0 ceph-mon[74356]: 12.5 scrub ok
Dec 15 10:39:30 compute-0 ceph-mon[74356]: 12.2 scrub starts
Dec 15 10:39:30 compute-0 ceph-mon[74356]: 12.2 scrub ok
Dec 15 10:39:30 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec 15 10:39:30 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 71 pg[10.1d( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=66/66 les/c/f=67/67/0 sis=71) [0] r=0 lpr=71 pi=[66,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:30 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 71 pg[10.5( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=66/66 les/c/f=67/67/0 sis=71) [0] r=0 lpr=71 pi=[66,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:30 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 71 pg[10.d( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=66/66 les/c/f=67/67/0 sis=71) [0] r=0 lpr=71 pi=[66,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:30 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 71 pg[10.15( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=66/66 les/c/f=67/67/0 sis=71) [0] r=0 lpr=71 pi=[66,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:30 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Dec 15 10:39:30 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Dec 15 10:39:31 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v78: 353 pgs: 353 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 28 B/s, 4 objects/s recovering
Dec 15 10:39:31 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Dec 15 10:39:31 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec 15 10:39:31 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event e1eba465-bf83-4382-9917-7e259be576d7 (Global Recovery Event) in 5 seconds
Dec 15 10:39:31 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:31 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:31 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:39:31 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec 15 10:39:31 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 15 10:39:31 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec 15 10:39:31 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec 15 10:39:31 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 72 pg[10.16( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=4 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=15.993096352s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=54'1067 mlcod 0'0 active pruub 187.037384033s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:31 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 72 pg[10.16( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=4 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=15.993057251s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=54'1067 mlcod 0'0 unknown NOTIFY pruub 187.037384033s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:31 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 72 pg[10.15( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=66/66 les/c/f=67/67/0 sis=72) [0]/[2] r=-1 lpr=72 pi=[66,72)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:31 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 72 pg[10.15( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=66/66 les/c/f=67/67/0 sis=72) [0]/[2] r=-1 lpr=72 pi=[66,72)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:31 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 72 pg[10.d( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=66/66 les/c/f=67/67/0 sis=72) [0]/[2] r=-1 lpr=72 pi=[66,72)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:31 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 72 pg[10.d( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=66/66 les/c/f=67/67/0 sis=72) [0]/[2] r=-1 lpr=72 pi=[66,72)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:31 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 72 pg[10.e( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=6 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=15.992640495s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=54'1067 mlcod 0'0 active pruub 187.037384033s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:31 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 72 pg[10.e( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=6 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=15.992621422s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=54'1067 mlcod 0'0 unknown NOTIFY pruub 187.037384033s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:31 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 72 pg[10.5( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=66/66 les/c/f=67/67/0 sis=72) [0]/[2] r=-1 lpr=72 pi=[66,72)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:31 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 72 pg[10.5( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=66/66 les/c/f=67/67/0 sis=72) [0]/[2] r=-1 lpr=72 pi=[66,72)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:31 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 72 pg[10.6( v 54'1067 (0'0,54'1067] local-lis/les=67/68 n=6 ec=59/47 lis/c=67/67 les/c/f=68/68/0 sis=72 pruub=14.965732574s) [1] r=-1 lpr=72 pi=[67,72)/1 crt=54'1067 mlcod 0'0 active pruub 186.010879517s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:31 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 72 pg[10.6( v 54'1067 (0'0,54'1067] local-lis/les=67/68 n=6 ec=59/47 lis/c=67/67 les/c/f=68/68/0 sis=72 pruub=14.965717316s) [1] r=-1 lpr=72 pi=[67,72)/1 crt=54'1067 mlcod 0'0 unknown NOTIFY pruub 186.010879517s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:31 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 72 pg[10.1d( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=66/66 les/c/f=67/67/0 sis=72) [0]/[2] r=-1 lpr=72 pi=[66,72)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:31 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 72 pg[10.1d( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=66/66 les/c/f=67/67/0 sis=72) [0]/[2] r=-1 lpr=72 pi=[66,72)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:31 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 72 pg[10.1e( v 54'1067 (0'0,54'1067] local-lis/les=67/68 n=5 ec=59/47 lis/c=67/67 les/c/f=68/68/0 sis=72 pruub=14.965396881s) [1] r=-1 lpr=72 pi=[67,72)/1 crt=54'1067 mlcod 0'0 active pruub 186.010894775s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:31 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 72 pg[10.1e( v 54'1067 (0'0,54'1067] local-lis/les=67/68 n=5 ec=59/47 lis/c=67/67 les/c/f=68/68/0 sis=72 pruub=14.965373039s) [1] r=-1 lpr=72 pi=[67,72)/1 crt=54'1067 mlcod 0'0 unknown NOTIFY pruub 186.010894775s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:31 compute-0 ceph-mon[74356]: 8.1 scrub starts
Dec 15 10:39:31 compute-0 ceph-mon[74356]: 8.1 scrub ok
Dec 15 10:39:31 compute-0 ceph-mon[74356]: pgmap v76: 353 pgs: 353 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 4 objects/s recovering
Dec 15 10:39:31 compute-0 ceph-mon[74356]: 12.1f deep-scrub starts
Dec 15 10:39:31 compute-0 ceph-mon[74356]: 12.1f deep-scrub ok
Dec 15 10:39:31 compute-0 ceph-mon[74356]: 9.7 deep-scrub starts
Dec 15 10:39:31 compute-0 ceph-mon[74356]: 9.7 deep-scrub ok
Dec 15 10:39:31 compute-0 ceph-mon[74356]: 9.0 deep-scrub starts
Dec 15 10:39:31 compute-0 ceph-mon[74356]: 9.0 deep-scrub ok
Dec 15 10:39:31 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 15 10:39:31 compute-0 ceph-mon[74356]: osdmap e71: 3 total, 3 up, 3 in
Dec 15 10:39:31 compute-0 ceph-mon[74356]: 12.1b deep-scrub starts
Dec 15 10:39:31 compute-0 ceph-mon[74356]: 12.1b deep-scrub ok
Dec 15 10:39:31 compute-0 ceph-mon[74356]: 11.a deep-scrub starts
Dec 15 10:39:31 compute-0 ceph-mon[74356]: 11.a deep-scrub ok
Dec 15 10:39:31 compute-0 ceph-mon[74356]: 9.1 scrub starts
Dec 15 10:39:31 compute-0 ceph-mon[74356]: 9.1 scrub ok
Dec 15 10:39:31 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec 15 10:39:31 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 15 10:39:31 compute-0 ceph-mon[74356]: osdmap e72: 3 total, 3 up, 3 in
Dec 15 10:39:31 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Dec 15 10:39:31 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Dec 15 10:39:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:32 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec 15 10:39:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:32 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:32 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Dec 15 10:39:33 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v80: 353 pgs: 4 unknown, 4 remapped+peering, 345 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:33 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec 15 10:39:33 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Dec 15 10:39:33 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec 15 10:39:33 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:33 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0001930 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:33 compute-0 ceph-mon[74356]: pgmap v78: 353 pgs: 353 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 28 B/s, 4 objects/s recovering
Dec 15 10:39:33 compute-0 ceph-mon[74356]: 12.1a scrub starts
Dec 15 10:39:33 compute-0 ceph-mon[74356]: 12.1a scrub ok
Dec 15 10:39:33 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 73 pg[10.6( v 54'1067 (0'0,54'1067] local-lis/les=67/68 n=6 ec=59/47 lis/c=67/67 les/c/f=68/68/0 sis=73) [1]/[0] r=0 lpr=73 pi=[67,73)/1 crt=54'1067 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:33 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 73 pg[10.1e( v 54'1067 (0'0,54'1067] local-lis/les=67/68 n=5 ec=59/47 lis/c=67/67 les/c/f=68/68/0 sis=73) [1]/[0] r=0 lpr=73 pi=[67,73)/1 crt=54'1067 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:33 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 73 pg[10.1e( v 54'1067 (0'0,54'1067] local-lis/les=67/68 n=5 ec=59/47 lis/c=67/67 les/c/f=68/68/0 sis=73) [1]/[0] r=0 lpr=73 pi=[67,73)/1 crt=54'1067 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:33 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 73 pg[10.16( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=4 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=73) [1]/[0] r=0 lpr=73 pi=[68,73)/1 crt=54'1067 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:33 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 73 pg[10.e( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=6 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=73) [1]/[0] r=0 lpr=73 pi=[68,73)/1 crt=54'1067 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:33 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 73 pg[10.16( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=4 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=73) [1]/[0] r=0 lpr=73 pi=[68,73)/1 crt=54'1067 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:33 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 73 pg[10.e( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=6 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=73) [1]/[0] r=0 lpr=73 pi=[68,73)/1 crt=54'1067 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:33 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 73 pg[10.6( v 54'1067 (0'0,54'1067] local-lis/les=67/68 n=6 ec=59/47 lis/c=67/67 les/c/f=68/68/0 sis=73) [1]/[0] r=0 lpr=73 pi=[67,73)/1 crt=54'1067 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:33 compute-0 podman[96120]: 2025-12-15 10:39:33.43427884 +0000 UTC m=+8.007626276 container create 94261fb868fe3a509c2390fb9823cd54c378009b7735a3969eead68c7ea3cd07 (image=quay.io/ceph/grafana:10.4.0, name=charming_swanson, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:39:33 compute-0 podman[96120]: 2025-12-15 10:39:33.416185392 +0000 UTC m=+7.989532878 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 15 10:39:33 compute-0 systemd[1]: Started libpod-conmon-94261fb868fe3a509c2390fb9823cd54c378009b7735a3969eead68c7ea3cd07.scope.
Dec 15 10:39:33 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:39:33 compute-0 podman[96120]: 2025-12-15 10:39:33.511331162 +0000 UTC m=+8.084678638 container init 94261fb868fe3a509c2390fb9823cd54c378009b7735a3969eead68c7ea3cd07 (image=quay.io/ceph/grafana:10.4.0, name=charming_swanson, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:39:33 compute-0 podman[96120]: 2025-12-15 10:39:33.517460102 +0000 UTC m=+8.090807548 container start 94261fb868fe3a509c2390fb9823cd54c378009b7735a3969eead68c7ea3cd07 (image=quay.io/ceph/grafana:10.4.0, name=charming_swanson, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:39:33 compute-0 charming_swanson[96397]: 472 0
Dec 15 10:39:33 compute-0 systemd[1]: libpod-94261fb868fe3a509c2390fb9823cd54c378009b7735a3969eead68c7ea3cd07.scope: Deactivated successfully.
Dec 15 10:39:33 compute-0 podman[96120]: 2025-12-15 10:39:33.523668382 +0000 UTC m=+8.097015828 container attach 94261fb868fe3a509c2390fb9823cd54c378009b7735a3969eead68c7ea3cd07 (image=quay.io/ceph/grafana:10.4.0, name=charming_swanson, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:39:33 compute-0 podman[96120]: 2025-12-15 10:39:33.523980091 +0000 UTC m=+8.097327537 container died 94261fb868fe3a509c2390fb9823cd54c378009b7735a3969eead68c7ea3cd07 (image=quay.io/ceph/grafana:10.4.0, name=charming_swanson, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:39:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ba94807b87660d0c7e6edeebc69f0e5942b3f2e72a087d80166d9ccaec0f044-merged.mount: Deactivated successfully.
Dec 15 10:39:33 compute-0 podman[96120]: 2025-12-15 10:39:33.556393039 +0000 UTC m=+8.129740485 container remove 94261fb868fe3a509c2390fb9823cd54c378009b7735a3969eead68c7ea3cd07 (image=quay.io/ceph/grafana:10.4.0, name=charming_swanson, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:39:33 compute-0 systemd[1]: libpod-conmon-94261fb868fe3a509c2390fb9823cd54c378009b7735a3969eead68c7ea3cd07.scope: Deactivated successfully.
Dec 15 10:39:33 compute-0 podman[96412]: 2025-12-15 10:39:33.624926081 +0000 UTC m=+0.048515088 container create e374d5bef6c2f50d4e0c594b4351c58eba57cad931712644084b0d171ad951d2 (image=quay.io/ceph/grafana:10.4.0, name=mystifying_engelbart, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:39:33 compute-0 systemd[1]: Started libpod-conmon-e374d5bef6c2f50d4e0c594b4351c58eba57cad931712644084b0d171ad951d2.scope.
Dec 15 10:39:33 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:39:33 compute-0 podman[96412]: 2025-12-15 10:39:33.680412842 +0000 UTC m=+0.104001829 container init e374d5bef6c2f50d4e0c594b4351c58eba57cad931712644084b0d171ad951d2 (image=quay.io/ceph/grafana:10.4.0, name=mystifying_engelbart, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:39:33 compute-0 podman[96412]: 2025-12-15 10:39:33.685879182 +0000 UTC m=+0.109468169 container start e374d5bef6c2f50d4e0c594b4351c58eba57cad931712644084b0d171ad951d2 (image=quay.io/ceph/grafana:10.4.0, name=mystifying_engelbart, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:39:33 compute-0 mystifying_engelbart[96429]: 472 0
Dec 15 10:39:33 compute-0 podman[96412]: 2025-12-15 10:39:33.689430476 +0000 UTC m=+0.113019493 container attach e374d5bef6c2f50d4e0c594b4351c58eba57cad931712644084b0d171ad951d2 (image=quay.io/ceph/grafana:10.4.0, name=mystifying_engelbart, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:39:33 compute-0 systemd[1]: libpod-e374d5bef6c2f50d4e0c594b4351c58eba57cad931712644084b0d171ad951d2.scope: Deactivated successfully.
Dec 15 10:39:33 compute-0 podman[96412]: 2025-12-15 10:39:33.60330907 +0000 UTC m=+0.026898077 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 15 10:39:33 compute-0 podman[96412]: 2025-12-15 10:39:33.698049828 +0000 UTC m=+0.121638815 container died e374d5bef6c2f50d4e0c594b4351c58eba57cad931712644084b0d171ad951d2 (image=quay.io/ceph/grafana:10.4.0, name=mystifying_engelbart, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:39:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc8bdb55705786486fef5dcdea6ebf5428e663d2670ea18e1f592fc9f38b6622-merged.mount: Deactivated successfully.
Dec 15 10:39:33 compute-0 podman[96412]: 2025-12-15 10:39:33.73475761 +0000 UTC m=+0.158346597 container remove e374d5bef6c2f50d4e0c594b4351c58eba57cad931712644084b0d171ad951d2 (image=quay.io/ceph/grafana:10.4.0, name=mystifying_engelbart, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:39:33 compute-0 systemd[1]: libpod-conmon-e374d5bef6c2f50d4e0c594b4351c58eba57cad931712644084b0d171ad951d2.scope: Deactivated successfully.
Dec 15 10:39:33 compute-0 systemd[1]: Reloading.
Dec 15 10:39:33 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.6 deep-scrub starts
Dec 15 10:39:33 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.6 deep-scrub ok
Dec 15 10:39:33 compute-0 systemd-sysv-generator[96475]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:39:33 compute-0 systemd-rc-local-generator[96469]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:39:34 compute-0 systemd[1]: Reloading.
Dec 15 10:39:34 compute-0 systemd-rc-local-generator[96512]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:39:34 compute-0 systemd-sysv-generator[96516]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:39:34 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec 15 10:39:34 compute-0 ceph-mon[74356]: 8.0 scrub starts
Dec 15 10:39:34 compute-0 ceph-mon[74356]: 8.0 scrub ok
Dec 15 10:39:34 compute-0 ceph-mon[74356]: 9.8 deep-scrub starts
Dec 15 10:39:34 compute-0 ceph-mon[74356]: 9.8 deep-scrub ok
Dec 15 10:39:34 compute-0 ceph-mon[74356]: 8.7 scrub starts
Dec 15 10:39:34 compute-0 ceph-mon[74356]: pgmap v80: 353 pgs: 4 unknown, 4 remapped+peering, 345 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:34 compute-0 ceph-mon[74356]: 8.7 scrub ok
Dec 15 10:39:34 compute-0 ceph-mon[74356]: osdmap e73: 3 total, 3 up, 3 in
Dec 15 10:39:34 compute-0 ceph-mon[74356]: 8.1f scrub starts
Dec 15 10:39:34 compute-0 ceph-mon[74356]: 8.1f scrub ok
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:34 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:34 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec 15 10:39:34 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec 15 10:39:34 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 74 pg[10.15( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=4 ec=59/47 lis/c=72/66 les/c/f=73/67/0 sis=74) [0] r=0 lpr=74 pi=[66,74)/1 luod=0'0 crt=54'1067 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:34 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 74 pg[10.15( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=4 ec=59/47 lis/c=72/66 les/c/f=73/67/0 sis=74) [0] r=0 lpr=74 pi=[66,74)/1 crt=54'1067 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:34 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 74 pg[10.d( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=6 ec=59/47 lis/c=72/66 les/c/f=73/67/0 sis=74) [0] r=0 lpr=74 pi=[66,74)/1 luod=0'0 crt=54'1067 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:34 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 74 pg[10.d( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=6 ec=59/47 lis/c=72/66 les/c/f=73/67/0 sis=74) [0] r=0 lpr=74 pi=[66,74)/1 crt=54'1067 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:34 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 74 pg[10.5( v 73'1081 (0'0,73'1081] local-lis/les=0/0 n=6 ec=59/47 lis/c=72/66 les/c/f=73/67/0 sis=74) [0] r=0 lpr=74 pi=[66,74)/1 luod=0'0 crt=69'1078 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:34 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 74 pg[10.5( v 73'1081 (0'0,73'1081] local-lis/les=0/0 n=6 ec=59/47 lis/c=72/66 les/c/f=73/67/0 sis=74) [0] r=0 lpr=74 pi=[66,74)/1 crt=69'1078 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:34 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 74 pg[10.1d( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=5 ec=59/47 lis/c=72/66 les/c/f=73/67/0 sis=74) [0] r=0 lpr=74 pi=[66,74)/1 luod=0'0 crt=54'1067 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:34 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 74 pg[10.1d( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=5 ec=59/47 lis/c=72/66 les/c/f=73/67/0 sis=74) [0] r=0 lpr=74 pi=[66,74)/1 crt=54'1067 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:34 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc0042e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:34 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 77365f67-614e-5a8d-b658-640395550c79...
Dec 15 10:39:34 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 74 pg[10.1e( v 54'1067 (0'0,54'1067] local-lis/les=73/74 n=5 ec=59/47 lis/c=67/67 les/c/f=68/68/0 sis=73) [1]/[0] async=[1] r=0 lpr=73 pi=[67,73)/1 crt=54'1067 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:34 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 74 pg[10.e( v 54'1067 (0'0,54'1067] local-lis/les=73/74 n=6 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=73) [1]/[0] async=[1] r=0 lpr=73 pi=[68,73)/1 crt=54'1067 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:34 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 74 pg[10.6( v 54'1067 (0'0,54'1067] local-lis/les=73/74 n=6 ec=59/47 lis/c=67/67 les/c/f=68/68/0 sis=73) [1]/[0] async=[1] r=0 lpr=73 pi=[67,73)/1 crt=54'1067 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:34 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 74 pg[10.16( v 54'1067 (0'0,54'1067] local-lis/les=73/74 n=4 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=73) [1]/[0] async=[1] r=0 lpr=73 pi=[68,73)/1 crt=54'1067 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0[95984]: ts=2025-12-15T10:39:34.610Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003353555s
Dec 15 10:39:34 compute-0 podman[96571]: 2025-12-15 10:39:34.657274776 +0000 UTC m=+0.044636496 container create 23d65a61651e1c0b44eb1273a6940cc9a6f8c6b86de8d6db3e4162bb6add8e05 (image=quay.io/ceph/grafana:10.4.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:39:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4623f86540b7075657d1cfaedf01c2719b281ffe7620ca0d63bd363343d6fe/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4623f86540b7075657d1cfaedf01c2719b281ffe7620ca0d63bd363343d6fe/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4623f86540b7075657d1cfaedf01c2719b281ffe7620ca0d63bd363343d6fe/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4623f86540b7075657d1cfaedf01c2719b281ffe7620ca0d63bd363343d6fe/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4623f86540b7075657d1cfaedf01c2719b281ffe7620ca0d63bd363343d6fe/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:34 compute-0 podman[96571]: 2025-12-15 10:39:34.71560807 +0000 UTC m=+0.102969820 container init 23d65a61651e1c0b44eb1273a6940cc9a6f8c6b86de8d6db3e4162bb6add8e05 (image=quay.io/ceph/grafana:10.4.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:39:34 compute-0 podman[96571]: 2025-12-15 10:39:34.721975966 +0000 UTC m=+0.109337686 container start 23d65a61651e1c0b44eb1273a6940cc9a6f8c6b86de8d6db3e4162bb6add8e05 (image=quay.io/ceph/grafana:10.4.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:39:34 compute-0 bash[96571]: 23d65a61651e1c0b44eb1273a6940cc9a6f8c6b86de8d6db3e4162bb6add8e05
Dec 15 10:39:34 compute-0 podman[96571]: 2025-12-15 10:39:34.637257401 +0000 UTC m=+0.024619151 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 15 10:39:34 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:39:34 compute-0 sudo[96030]: pam_unix(sudo:session): session closed for user root
Dec 15 10:39:34 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:39:34 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:34 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:39:34 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:34 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec 15 10:39:34 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:34 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev 4f834d59-8431-4f9b-b3d8-1e57c8dde683 (Updating grafana deployment (+1 -> 1))
Dec 15 10:39:34 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event 4f834d59-8431-4f9b-b3d8-1e57c8dde683 (Updating grafana deployment (+1 -> 1)) in 10 seconds
Dec 15 10:39:34 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec 15 10:39:34 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:34 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev 323c44e5-c5b0-419d-b193-76474c7238a9 (Updating ingress.rgw.default deployment (+4 -> 4))
Dec 15 10:39:34 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Dec 15 10:39:34 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:34 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.vdqmne on compute-0
Dec 15 10:39:34 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.vdqmne on compute-0
Dec 15 10:39:34 compute-0 sudo[96605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=settings t=2025-12-15T10:39:34.904219261Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-12-15T10:39:34Z
Dec 15 10:39:34 compute-0 sudo[96605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=settings t=2025-12-15T10:39:34.904589042Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=settings t=2025-12-15T10:39:34.904603262Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=settings t=2025-12-15T10:39:34.904607852Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=settings t=2025-12-15T10:39:34.904612102Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=settings t=2025-12-15T10:39:34.904615932Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=settings t=2025-12-15T10:39:34.904620003Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=settings t=2025-12-15T10:39:34.904624323Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=settings t=2025-12-15T10:39:34.904628813Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=settings t=2025-12-15T10:39:34.904635953Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=settings t=2025-12-15T10:39:34.904639653Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=settings t=2025-12-15T10:39:34.904643413Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=settings t=2025-12-15T10:39:34.904647613Z level=info msg=Target target=[all]
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=settings t=2025-12-15T10:39:34.904655924Z level=info msg="Path Home" path=/usr/share/grafana
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=settings t=2025-12-15T10:39:34.904660784Z level=info msg="Path Data" path=/var/lib/grafana
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=settings t=2025-12-15T10:39:34.904664644Z level=info msg="Path Logs" path=/var/log/grafana
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=settings t=2025-12-15T10:39:34.904671024Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=settings t=2025-12-15T10:39:34.904674914Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=settings t=2025-12-15T10:39:34.904678554Z level=info msg="App mode production"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=sqlstore t=2025-12-15T10:39:34.906108037Z level=info msg="Connecting to DB" dbtype=sqlite3
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=sqlstore t=2025-12-15T10:39:34.906148108Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.907243529Z level=info msg="Starting DB migrations"
Dec 15 10:39:34 compute-0 sudo[96605]: pam_unix(sudo:session): session closed for user root
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.908594619Z level=info msg="Executing migration" id="create migration_log table"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.909638489Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.04343ms
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.911834694Z level=info msg="Executing migration" id="create user table"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.912655117Z level=info msg="Migration successfully executed" id="create user table" duration=821.443µs
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.914715728Z level=info msg="Executing migration" id="add unique index user.login"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.915391137Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=679.149µs
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.917973642Z level=info msg="Executing migration" id="add unique index user.email"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.918854959Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=883.876µs
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.921758423Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.922440974Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=683.761µs
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.925677568Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.926717498Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.03481ms
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.929050746Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.931775566Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.72387ms
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.933800296Z level=info msg="Executing migration" id="create user table v2"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.934612429Z level=info msg="Migration successfully executed" id="create user table v2" duration=811.232µs
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.93774056Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.938622356Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=884.286µs
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.942788938Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.94354678Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=759.442µs
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.947492006Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.94797677Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=485.044µs
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.949651968Z level=info msg="Executing migration" id="Drop old table user_v1"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.950223935Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=571.237µs
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.952637056Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.953844851Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.209415ms
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.956455058Z level=info msg="Executing migration" id="Update user table charset"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.95653737Z level=info msg="Migration successfully executed" id="Update user table charset" duration=83.592µs
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.958621581Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.959898938Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.277396ms
Dec 15 10:39:34 compute-0 sudo[96630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.961828325Z level=info msg="Executing migration" id="Add missing user data"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.96270989Z level=info msg="Migration successfully executed" id="Add missing user data" duration=884.266µs
Dec 15 10:39:34 compute-0 sudo[96630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.965380698Z level=info msg="Executing migration" id="Add is_disabled column to user"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.966340776Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=960.008µs
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.968714305Z level=info msg="Executing migration" id="Add index user.login/user.email"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.969363624Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=648.599µs
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.971151536Z level=info msg="Executing migration" id="Add is_service_account column to user"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.972048783Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=897.117µs
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.973941918Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.980149999Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=6.207811ms
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.982549269Z level=info msg="Executing migration" id="Add uid column to user"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.983513628Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=964.149µs
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.986033432Z level=info msg="Executing migration" id="Update uid column values for users"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.986289739Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=256.398µs
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.989829393Z level=info msg="Executing migration" id="Add unique index user_uid"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.990551623Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=722.57µs
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.992623664Z level=info msg="Executing migration" id="create temp user table v1-7"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.993344935Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=719.341µs
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.995653823Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.996290241Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=636.308µs
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.997990331Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Dec 15 10:39:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:34.998599279Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=608.768µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.001892725Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.002546224Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=653.829µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.004774869Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.00547919Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=704.151µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.007373885Z level=info msg="Executing migration" id="Update temp_user table charset"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.007435006Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=61.421µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.010942329Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.011624659Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=682.18µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.013967178Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.014580736Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=613.688µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.017396518Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.01814782Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=752.132µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.020416946Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.021141688Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=724.432µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.024731542Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.028380019Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.645867ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.030914293Z level=info msg="Executing migration" id="create temp_user v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.031769698Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=854.905µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.033528919Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.034144417Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=615.338µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.035918769Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.03663714Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=715.891µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.04109171Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.041901644Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=810.404µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.044069627Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.044750437Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=680.72µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.046710434Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.047084925Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=372.421µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.049339861Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.049861906Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=523.635µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.051721401Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.052087341Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=366.03µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.053929576Z level=info msg="Executing migration" id="create star table"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.054508533Z level=info msg="Migration successfully executed" id="create star table" duration=576.037µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.056672126Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.057531141Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=859.215µs
Dec 15 10:39:35 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v83: 353 pgs: 4 unknown, 4 remapped+peering, 345 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.060334973Z level=info msg="Executing migration" id="create org table v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.061256819Z level=info msg="Migration successfully executed" id="create org table v1" duration=923.206µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.063279488Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.063980169Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=699.281µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.066211494Z level=info msg="Executing migration" id="create org_user table v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.066878304Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=664.139µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.068618355Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.069350496Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=732.791µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.071390296Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.072037974Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=647.568µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.076171905Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.077102693Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=933.278µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.079225065Z level=info msg="Executing migration" id="Update org table charset"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.079286427Z level=info msg="Migration successfully executed" id="Update org table charset" duration=61.342µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.082094699Z level=info msg="Executing migration" id="Update org_user table charset"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.082239133Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=146.305µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.084563411Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.084789817Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=226.846µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.08659927Z level=info msg="Executing migration" id="create dashboard table"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.087394533Z level=info msg="Migration successfully executed" id="create dashboard table" duration=795.033µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.089372921Z level=info msg="Executing migration" id="add index dashboard.account_id"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.090124303Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=751.312µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.091912055Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.092705999Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=793.893µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.095519771Z level=info msg="Executing migration" id="create dashboard_tag table"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.096845569Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.331339ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.09996522Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.101184836Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.220976ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.103228896Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.104104842Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=876.016µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.106240834Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Dec 15 10:39:35 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:39:35 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.113102055Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=6.85322ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.116260577Z level=info msg="Executing migration" id="create dashboard v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.117776451Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=1.517164ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.120217622Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.121649585Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.431822ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.12389383Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Dec 15 10:39:35 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:39:35 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.125897468Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.990658ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.128542915Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.12937334Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=832.215µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.132033328Z level=info msg="Executing migration" id="drop table dashboard_v1"
Dec 15 10:39:35 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:39:35 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.133543221Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.510933ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.135956353Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.136136107Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=177.864µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.138690723Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.140709561Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.018749ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.143612266Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.145490531Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.879695ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.147457058Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.149145038Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.68844ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.150838777Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.151583239Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=744.772µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.15334279Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.154831624Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.488704ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.157546413Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.158351387Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=805.454µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.160374956Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.161154399Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=780.433µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.163042364Z level=info msg="Executing migration" id="Update dashboard table charset"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.163114866Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=75.732µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.165138855Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.165226097Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=81.312µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.166837574Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.168426971Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.587627ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.170143281Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.171661386Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.518115ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.173697315Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.175276241Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.579216ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.177312031Z level=info msg="Executing migration" id="Add column uid in dashboard"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.178798054Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.483303ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.180474323Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.18068871Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=214.807µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.182523403Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.183156872Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=633.749µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.184603063Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.185306074Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=702.831µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.186914561Z level=info msg="Executing migration" id="Update dashboard title length"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.187029464Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=104.323µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.188469457Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.18927524Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=805.123µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.190896248Z level=info msg="Executing migration" id="create dashboard_provisioning"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.191577698Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=681.53µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.19438218Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.198472489Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=4.087378ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.20057176Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.20125991Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=687.96µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.203906867Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.204598728Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=691.581µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.20641238Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.207106661Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=694.471µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.210278054Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.210620424Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=342.76µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.212468418Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.213107827Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=639.688µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.215695972Z level=info msg="Executing migration" id="Add check_sum column"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.217313449Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.617297ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.219556735Z level=info msg="Executing migration" id="Add index for dashboard_title"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.220222524Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=665.809µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.222137951Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.222338247Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=199.625µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.224453648Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.224636853Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=184.145µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.226495967Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.227177398Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=681.411µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.229423353Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.233730169Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=4.301606ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.236260163Z level=info msg="Executing migration" id="create data_source table"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.237316784Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.05628ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.239522519Z level=info msg="Executing migration" id="add index data_source.account_id"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:35 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.240143886Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=621.117µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.241993751Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.242591697Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=597.736µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.244458403Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.245106832Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=648.169µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.246721709Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.247521092Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=798.603µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.249315804Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.255544387Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.225092ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.257549925Z level=info msg="Executing migration" id="create data_source table v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.258835443Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.284458ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.260934784Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.261628255Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=690.6µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.263576951Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.264289292Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=711.671µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.26593012Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.26662941Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=701.23µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.26834211Z level=info msg="Executing migration" id="Add column with_credentials"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.270243776Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.902306ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.271788801Z level=info msg="Executing migration" id="Add secure json data column"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.273561833Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=1.770991ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.274946173Z level=info msg="Executing migration" id="Update data_source table charset"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.274977594Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=32.301µs
Dec 15 10:39:35 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.278112256Z level=info msg="Executing migration" id="Update initial version to 1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.278417435Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=306.399µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.280110904Z level=info msg="Executing migration" id="Add read_only data column"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.28237824Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.266586ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.284550474Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.284811571Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=260.928µs
Dec 15 10:39:35 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.287027117Z level=info msg="Executing migration" id="Update json_data with nulls"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.287283473Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=255.766µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.289742946Z level=info msg="Executing migration" id="Add uid column"
Dec 15 10:39:35 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.293297289Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.553423ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.295434262Z level=info msg="Executing migration" id="Update uid value"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.295605656Z level=info msg="Migration successfully executed" id="Update uid value" duration=172.114µs
Dec 15 10:39:35 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 75 pg[10.16( v 54'1067 (0'0,54'1067] local-lis/les=73/74 n=4 ec=59/47 lis/c=73/68 les/c/f=74/69/0 sis=75 pruub=15.205549240s) [1] async=[1] r=-1 lpr=75 pi=[68,75)/1 crt=54'1067 mlcod 54'1067 active pruub 190.176147461s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:35 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 75 pg[10.16( v 54'1067 (0'0,54'1067] local-lis/les=73/74 n=4 ec=59/47 lis/c=73/68 les/c/f=74/69/0 sis=75 pruub=15.205467224s) [1] r=-1 lpr=75 pi=[68,75)/1 crt=54'1067 mlcod 0'0 unknown NOTIFY pruub 190.176147461s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:35 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 75 pg[10.e( v 54'1067 (0'0,54'1067] local-lis/les=73/74 n=6 ec=59/47 lis/c=73/68 les/c/f=74/69/0 sis=75 pruub=15.201200485s) [1] async=[1] r=-1 lpr=75 pi=[68,75)/1 crt=54'1067 mlcod 54'1067 active pruub 190.172271729s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:35 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 75 pg[10.e( v 54'1067 (0'0,54'1067] local-lis/les=73/74 n=6 ec=59/47 lis/c=73/68 les/c/f=74/69/0 sis=75 pruub=15.201164246s) [1] r=-1 lpr=75 pi=[68,75)/1 crt=54'1067 mlcod 0'0 unknown NOTIFY pruub 190.172271729s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:35 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 75 pg[10.6( v 54'1067 (0'0,54'1067] local-lis/les=73/74 n=6 ec=59/47 lis/c=73/67 les/c/f=74/68/0 sis=75 pruub=15.204735756s) [1] async=[1] r=-1 lpr=75 pi=[67,75)/1 crt=54'1067 mlcod 54'1067 active pruub 190.176147461s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:35 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 75 pg[10.6( v 54'1067 (0'0,54'1067] local-lis/les=73/74 n=6 ec=59/47 lis/c=73/67 les/c/f=74/68/0 sis=75 pruub=15.204686165s) [1] r=-1 lpr=75 pi=[67,75)/1 crt=54'1067 mlcod 0'0 unknown NOTIFY pruub 190.176147461s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:35 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 75 pg[10.1e( v 54'1067 (0'0,54'1067] local-lis/les=73/74 n=5 ec=59/47 lis/c=73/67 les/c/f=74/68/0 sis=75 pruub=15.199955940s) [1] async=[1] r=-1 lpr=75 pi=[67,75)/1 crt=54'1067 mlcod 54'1067 active pruub 190.171707153s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.297565254Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Dec 15 10:39:35 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 75 pg[10.1e( v 54'1067 (0'0,54'1067] local-lis/les=73/74 n=5 ec=59/47 lis/c=73/67 les/c/f=74/68/0 sis=75 pruub=15.199919701s) [1] r=-1 lpr=75 pi=[67,75)/1 crt=54'1067 mlcod 0'0 unknown NOTIFY pruub 190.171707153s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.298327457Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=761.893µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.301101128Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.301791157Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=690.499µs
Dec 15 10:39:35 compute-0 ceph-mon[74356]: 11.6 deep-scrub starts
Dec 15 10:39:35 compute-0 ceph-mon[74356]: 11.6 deep-scrub ok
Dec 15 10:39:35 compute-0 ceph-mon[74356]: osdmap e74: 3 total, 3 up, 3 in
Dec 15 10:39:35 compute-0 ceph-mon[74356]: 8.9 scrub starts
Dec 15 10:39:35 compute-0 ceph-mon[74356]: 8.9 scrub ok
Dec 15 10:39:35 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:35 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:35 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:35 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:35 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:35 compute-0 ceph-mon[74356]: Deploying daemon haproxy.rgw.default.compute-0.vdqmne on compute-0
Dec 15 10:39:35 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 75 pg[10.15( v 54'1067 (0'0,54'1067] local-lis/les=74/75 n=4 ec=59/47 lis/c=72/66 les/c/f=73/67/0 sis=74) [0] r=0 lpr=74 pi=[66,74)/1 crt=54'1067 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:35 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 75 pg[10.d( v 54'1067 (0'0,54'1067] local-lis/les=74/75 n=6 ec=59/47 lis/c=72/66 les/c/f=73/67/0 sis=74) [0] r=0 lpr=74 pi=[66,74)/1 crt=54'1067 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:35 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 75 pg[10.5( v 73'1081 (0'0,73'1081] local-lis/les=74/75 n=6 ec=59/47 lis/c=72/66 les/c/f=73/67/0 sis=74) [0] r=0 lpr=74 pi=[66,74)/1 crt=73'1081 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:35 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 75 pg[10.1d( v 54'1067 (0'0,54'1067] local-lis/les=74/75 n=5 ec=59/47 lis/c=72/66 les/c/f=73/67/0 sis=74) [0] r=0 lpr=74 pi=[66,74)/1 crt=54'1067 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.303873999Z level=info msg="Executing migration" id="create api_key table"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.304716953Z level=info msg="Migration successfully executed" id="create api_key table" duration=843.214µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.308215445Z level=info msg="Executing migration" id="add index api_key.account_id"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.309218225Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.00628ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.3111214Z level=info msg="Executing migration" id="add index api_key.key"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.311755089Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=633.709µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.313340065Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.313983464Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=643.239µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.315447176Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.316414525Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=966.979µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.319151375Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.319900407Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=748.952µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.321413861Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.322024509Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=610.458µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.324164731Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Dec 15 10:39:35 compute-0 podman[96693]: 2025-12-15 10:39:35.325067308 +0000 UTC m=+0.043791690 container create 07559d41464ca16892ac29943f1f46cea84bdc1aaed97475a622ab22f4707c68 (image=quay.io/ceph/haproxy:2.3, name=youthful_davinci)
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.32889276Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=4.725539ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.332611818Z level=info msg="Executing migration" id="create api_key table v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.333673529Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=1.062161ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.335571855Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.336209693Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=640.229µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.338319485Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.338973544Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=654.419µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.342397464Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.343063304Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=666.23µs
Dec 15 10:39:35 compute-0 systemd[1]: Started libpod-conmon-07559d41464ca16892ac29943f1f46cea84bdc1aaed97475a622ab22f4707c68.scope.
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.361231304Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.361712349Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=482.355µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.364042787Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.364583883Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=540.596µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.366593801Z level=info msg="Executing migration" id="Update api_key table charset"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.366612542Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=19.261µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.368541448Z level=info msg="Executing migration" id="Add expires to api_key table"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.370524946Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=1.982708ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.375368538Z level=info msg="Executing migration" id="Add service account foreign key"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.377754887Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.38792ms
Dec 15 10:39:35 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:39:35 compute-0 podman[96693]: 2025-12-15 10:39:35.306278339 +0000 UTC m=+0.025002751 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.411078331Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.411349579Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=274.388µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.414335396Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.416431377Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.095531ms
Dec 15 10:39:35 compute-0 podman[96693]: 2025-12-15 10:39:35.418140928 +0000 UTC m=+0.136865330 container init 07559d41464ca16892ac29943f1f46cea84bdc1aaed97475a622ab22f4707c68 (image=quay.io/ceph/haproxy:2.3, name=youthful_davinci)
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.421508066Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.423461433Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=1.954237ms
Dec 15 10:39:35 compute-0 podman[96693]: 2025-12-15 10:39:35.426688887 +0000 UTC m=+0.145413269 container start 07559d41464ca16892ac29943f1f46cea84bdc1aaed97475a622ab22f4707c68 (image=quay.io/ceph/haproxy:2.3, name=youthful_davinci)
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.426899764Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.427588653Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=688.949µs
Dec 15 10:39:35 compute-0 youthful_davinci[96710]: 0 0
Dec 15 10:39:35 compute-0 systemd[1]: libpod-07559d41464ca16892ac29943f1f46cea84bdc1aaed97475a622ab22f4707c68.scope: Deactivated successfully.
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.432329362Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.433423514Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=1.094822ms
Dec 15 10:39:35 compute-0 podman[96693]: 2025-12-15 10:39:35.434486935 +0000 UTC m=+0.153211327 container attach 07559d41464ca16892ac29943f1f46cea84bdc1aaed97475a622ab22f4707c68 (image=quay.io/ceph/haproxy:2.3, name=youthful_davinci)
Dec 15 10:39:35 compute-0 podman[96693]: 2025-12-15 10:39:35.434806524 +0000 UTC m=+0.153530906 container died 07559d41464ca16892ac29943f1f46cea84bdc1aaed97475a622ab22f4707c68 (image=quay.io/ceph/haproxy:2.3, name=youthful_davinci)
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.437141903Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.438004778Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=862.805µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.441827669Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.442721466Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=892.626µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.448872656Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.449685139Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=812.493µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.488360879Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.490614035Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=2.268886ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.494438046Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.494504918Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=67.942µs
Dec 15 10:39:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ae6b185c0b232d27abb4908aa7a9b48537ea8d88b5b882bc4821e84a51be4bc-merged.mount: Deactivated successfully.
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.49659747Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.496658582Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=64.872µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.499932678Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.503996726Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.062238ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.506043635Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.508560119Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.514324ms
Dec 15 10:39:35 compute-0 podman[96693]: 2025-12-15 10:39:35.509412324 +0000 UTC m=+0.228136706 container remove 07559d41464ca16892ac29943f1f46cea84bdc1aaed97475a622ab22f4707c68 (image=quay.io/ceph/haproxy:2.3, name=youthful_davinci)
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.510491636Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.510557598Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=66.932µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.514826022Z level=info msg="Executing migration" id="create quota table v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.515656527Z level=info msg="Migration successfully executed" id="create quota table v1" duration=831.535µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.517659995Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.518555722Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=897.356µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.522070124Z level=info msg="Executing migration" id="Update quota table charset"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.522092275Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=22.961µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.524304599Z level=info msg="Executing migration" id="create plugin_setting table"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.525098913Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=794.504µs
Dec 15 10:39:35 compute-0 systemd[1]: libpod-conmon-07559d41464ca16892ac29943f1f46cea84bdc1aaed97475a622ab22f4707c68.scope: Deactivated successfully.
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.527128132Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.527868814Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=740.212µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.531652714Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.534436215Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.782911ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.536345692Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.536370961Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=29.119µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.539124543Z level=info msg="Executing migration" id="create session table"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.540307797Z level=info msg="Migration successfully executed" id="create session table" duration=1.184314ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.624790616Z level=info msg="Executing migration" id="Drop old table playlist table"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.6249332Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=146.915µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.627925207Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.627996149Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=71.752µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.631564243Z level=info msg="Executing migration" id="create playlist table v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.632287434Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=725.421µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.634024856Z level=info msg="Executing migration" id="create playlist item table v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.634736896Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=712.15µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.63828971Z level=info msg="Executing migration" id="Update playlist table charset"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.638321231Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=32.491µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.641305088Z level=info msg="Executing migration" id="Update playlist_item table charset"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.641331429Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=27.611µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.645734317Z level=info msg="Executing migration" id="Add playlist column created_at"
Dec 15 10:39:35 compute-0 systemd[1]: Reloading.
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.649385214Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.649417ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.652758233Z level=info msg="Executing migration" id="Add playlist column updated_at"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.656246805Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.488432ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.658091198Z level=info msg="Executing migration" id="drop preferences table v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.658180811Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=90.403µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.660102007Z level=info msg="Executing migration" id="drop preferences table v3"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.660219Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=114.953µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.663201428Z level=info msg="Executing migration" id="create preferences table v3"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.6639443Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=759.313µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.666340159Z level=info msg="Executing migration" id="Update preferences table charset"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.66636105Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=21.951µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.668937645Z level=info msg="Executing migration" id="Add column team_id in preferences"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.671540121Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.601436ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.674378674Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.674524769Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=146.544µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.676122106Z level=info msg="Executing migration" id="Add column week_start in preferences"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.679142194Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.019698ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.681531424Z level=info msg="Executing migration" id="Add column preferences.json_data"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.684617774Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.08554ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.686812808Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.68687178Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=59.192µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.68893358Z level=info msg="Executing migration" id="Add preferences index org_id"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.689829846Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=895.706µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.6916688Z level=info msg="Executing migration" id="Add preferences index user_id"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.692554306Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=884.926µs
Dec 15 10:39:35 compute-0 systemd-sysv-generator[96762]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:39:35 compute-0 systemd-rc-local-generator[96759]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.821463892Z level=info msg="Executing migration" id="create alert table v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.824482901Z level=info msg="Migration successfully executed" id="create alert table v1" duration=3.021499ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.830753303Z level=info msg="Executing migration" id="add index alert org_id & id "
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.832048821Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.298018ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.838412298Z level=info msg="Executing migration" id="add index alert state"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.839824418Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.4143ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.84228753Z level=info msg="Executing migration" id="add index alert dashboard_id"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.843143676Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=855.056µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.844744782Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.84535635Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=611.798µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.847139842Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.848004157Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=862.135µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.857681361Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.858557776Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=878.775µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.86073062Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.868425704Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=7.691394ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.87135028Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.872002899Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=650.439µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.873995887Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.874804921Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=808.784µs
Dec 15 10:39:35 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 10.d deep-scrub starts
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.898710539Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.899264956Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=556.896µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.903905791Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.905318913Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=1.413291ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.909161024Z level=info msg="Executing migration" id="create alert_notification table v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.910130453Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=970.249µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.912855842Z level=info msg="Executing migration" id="Add column is_default"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.916489429Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.635687ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.920150005Z level=info msg="Executing migration" id="Add column frequency"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.924796062Z level=info msg="Migration successfully executed" id="Add column frequency" duration=4.646017ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.926832611Z level=info msg="Executing migration" id="Add column send_reminder"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.930313643Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.481052ms
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.932093134Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.936086702Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.993067ms
Dec 15 10:39:35 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 10.d deep-scrub ok
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.939497331Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.940939583Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.442242ms
Dec 15 10:39:35 compute-0 systemd[1]: Reloading.
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.992401927Z level=info msg="Executing migration" id="Update alert table charset"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.992470879Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=101.863µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.99490348Z level=info msg="Executing migration" id="Update alert_notification table charset"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.994925121Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=23.091µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.997221888Z level=info msg="Executing migration" id="create notification_journal table v1"
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.997924859Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=702.74µs
Dec 15 10:39:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:35.99935984Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.000174144Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=813.864µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.005340885Z level=info msg="Executing migration" id="drop alert_notification_journal"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.006309974Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=966.848µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.008067164Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.008960631Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=893.677µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.010825255Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.011651099Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=825.494µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.013970657Z level=info msg="Executing migration" id="Add for to alert table"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.016863042Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=2.894595ms
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.01886887Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.021630061Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=2.75813ms
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.023881946Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.024071242Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=191.176µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.025914356Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.026669958Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=755.402µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.028484551Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.029464139Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=979.578µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.030974714Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.034300161Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.324297ms
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.040591764Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.040656416Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=63.712µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.042272484Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.042957714Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=683.25µs
Dec 15 10:39:36 compute-0 systemd-rc-local-generator[96798]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.045319233Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.046044134Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=724.761µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.047712153Z level=info msg="Executing migration" id="Drop old annotation table v4"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.047782895Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=71.112µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.050738762Z level=info msg="Executing migration" id="create annotation table v5"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.051452793Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=714.012µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.053277936Z level=info msg="Executing migration" id="add index annotation 0 v3"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.053907464Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=627.628µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.055917523Z level=info msg="Executing migration" id="add index annotation 1 v3"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.056582832Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=665.199µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.05822207Z level=info msg="Executing migration" id="add index annotation 2 v3"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.058853518Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=631.108µs
Dec 15 10:39:36 compute-0 systemd-sysv-generator[96801]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.062946698Z level=info msg="Executing migration" id="add index annotation 3 v3"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.063665249Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=717.671µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.066692407Z level=info msg="Executing migration" id="add index annotation 4 v3"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.067707357Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.01711ms
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.069942253Z level=info msg="Executing migration" id="Update annotation table charset"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.069966033Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=24.711µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.07156558Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.074875016Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.308187ms
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.076426382Z level=info msg="Executing migration" id="Drop category_id index"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.077161533Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=735.781µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.079495601Z level=info msg="Executing migration" id="Add column tags to annotation table"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.082318844Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=2.825163ms
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.084007253Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.084585721Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=578.058µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.087914387Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.088602987Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=688.59µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.090175404Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.091060389Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=882.795µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.092919274Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.102262406Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=9.342442ms
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.103781661Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.104375649Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=593.908µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.110146688Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.11094967Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=802.082µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.112426394Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.112765984Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=339.57µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.114303678Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.114918067Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=614.249µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.118000747Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.118268695Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=268.618µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.122289502Z level=info msg="Executing migration" id="Add created time to annotation table"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.125780564Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=3.490782ms
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.127344539Z level=info msg="Executing migration" id="Add updated time to annotation table"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.130487951Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.141932ms
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.132076318Z level=info msg="Executing migration" id="Add index for created in annotation table"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.13285103Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=774.402µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.134322904Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.135019434Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=696.39µs
Dec 15 10:39:36 compute-0 ceph-mgr[74651]: [progress INFO root] Writing back 26 completed events
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.141589576Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.141778271Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=188.966µs
Dec 15 10:39:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.14722474Z level=info msg="Executing migration" id="Add epoch_end column"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.150365632Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=3.138502ms
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.151839846Z level=info msg="Executing migration" id="Add index for epoch_end"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.152577917Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=737.801µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.154878045Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.155017839Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=140.034µs
Dec 15 10:39:36 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:36 compute-0 ceph-mgr[74651]: [progress WARNING root] Starting Global Recovery Event,8 pgs not in active + clean state
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.164453264Z level=info msg="Executing migration" id="Move region to single row"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.164870446Z level=info msg="Migration successfully executed" id="Move region to single row" duration=420.342µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.16638351Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.167349998Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=964.838µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.169283435Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.17013244Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=848.635µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.171795419Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.172685094Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=888.935µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.173947602Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.174810406Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=862.554µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.177097974Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.177953618Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=854.564µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.179607067Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.180311587Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=704.65µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.182555013Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.182614695Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=60.312µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.184223481Z level=info msg="Executing migration" id="create test_data table"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.185023595Z level=info msg="Migration successfully executed" id="create test_data table" duration=799.154µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.188176317Z level=info msg="Executing migration" id="create dashboard_version table v1"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.18897121Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=794.663µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.190799114Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.191536005Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=734.641µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.193323168Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.19409277Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=770.192µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.196252213Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.196452799Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=200.326µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.198266842Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.198563891Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=296.689µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.200699933Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.200746674Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=47.161µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.202351682Z level=info msg="Executing migration" id="create team table"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.202914337Z level=info msg="Migration successfully executed" id="create team table" duration=562.615µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.204535885Z level=info msg="Executing migration" id="add index team.org_id"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.205287948Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=751.793µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.207364398Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.208010807Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=647.709µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.211539059Z level=info msg="Executing migration" id="Add column uid in team"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.21464864Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=3.108731ms
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.216023191Z level=info msg="Executing migration" id="Update uid column values in team"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.216160725Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=138.014µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.217948637Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.218959597Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.01095ms
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.220668516Z level=info msg="Executing migration" id="create team member table"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.221535622Z level=info msg="Migration successfully executed" id="create team member table" duration=866.526µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.223509279Z level=info msg="Executing migration" id="add index team_member.org_id"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.224486468Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=975.219µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.226443485Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.227129425Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=686.05µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.230386171Z level=info msg="Executing migration" id="add index team_member.team_id"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.23616794Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=5.781569ms
Dec 15 10:39:36 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.vdqmne for 77365f67-614e-5a8d-b658-640395550c79...
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:36 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0001930 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.28139402Z level=info msg="Executing migration" id="Add column email to team table"
Dec 15 10:39:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.286731926Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=5.340206ms
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.289847547Z level=info msg="Executing migration" id="Add column external to team_member table"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.294915056Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=5.070029ms
Dec 15 10:39:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.311702077Z level=info msg="Executing migration" id="Add column permission to team_member table"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.315220309Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=3.518692ms
Dec 15 10:39:36 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.316776835Z level=info msg="Executing migration" id="create dashboard acl table"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.317567207Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=790.262µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.319544506Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.320255556Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=711.01µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.322298036Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.323033948Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=735.572µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.325407267Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Dec 15 10:39:36 compute-0 ceph-mon[74356]: pgmap v83: 353 pgs: 4 unknown, 4 remapped+peering, 345 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:36 compute-0 ceph-mon[74356]: osdmap e75: 3 total, 3 up, 3 in
Dec 15 10:39:36 compute-0 ceph-mon[74356]: 9.5 scrub starts
Dec 15 10:39:36 compute-0 ceph-mon[74356]: 9.5 scrub ok
Dec 15 10:39:36 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.32791841Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=2.508443ms
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.330523746Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.331833014Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.314968ms
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.334152593Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.336057028Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.904545ms
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.339130918Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.34025269Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.125172ms
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.341786106Z level=info msg="Executing migration" id="add index dashboard_permission"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.342628321Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=842.524µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.343976589Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.344478034Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=502.305µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.346227475Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.346418261Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=190.636µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.348211424Z level=info msg="Executing migration" id="create tag table"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.348858172Z level=info msg="Migration successfully executed" id="create tag table" duration=646.788µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.352629672Z level=info msg="Executing migration" id="add index tag.key_value"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.353332973Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=703.241µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.35461166Z level=info msg="Executing migration" id="create login attempt table"
Dec 15 10:39:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.355256699Z level=info msg="Migration successfully executed" id="create login attempt table" duration=644.389µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.356796013Z level=info msg="Executing migration" id="add index login_attempt.username"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.357558606Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=762.033µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.359121762Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.359858103Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=736.141µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.361803121Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.372506883Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=10.702392ms
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.37411999Z level=info msg="Executing migration" id="create login_attempt v2"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.374705997Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=584.257µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.376262262Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.376954833Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=692.581µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.379089565Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.379374133Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=284.108µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.380930409Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.381523427Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=593.188µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.383291918Z level=info msg="Executing migration" id="create user auth table"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.383891646Z level=info msg="Migration successfully executed" id="create user auth table" duration=599.558µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.386686638Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.387888332Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.201834ms
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.393996561Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.394114304Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=119.863µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:36 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8001fc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.402252782Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.406457575Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=4.213183ms
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.412846061Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.416842888Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.996927ms
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.421047481Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.425081989Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=4.035518ms
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.432526706Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.436556155Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=4.030388ms
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.443348443Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.444164697Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=820.254µs
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.525870784Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.534444935Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=8.580701ms
Dec 15 10:39:36 compute-0 podman[96852]: 2025-12-15 10:39:36.444890858 +0000 UTC m=+0.023244691 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.729232906Z level=info msg="Executing migration" id="create server_lock table"
Dec 15 10:39:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:36.730520874Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.291159ms
Dec 15 10:39:36 compute-0 podman[96852]: 2025-12-15 10:39:36.764389773 +0000 UTC m=+0.342743596 container create 882ba9047be1c674b34536901f1afd1280219eb9f94bc0913daca9d39619acc0 (image=quay.io/ceph/haproxy:2.3, name=ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-rgw-default-compute-0-vdqmne)
Dec 15 10:39:36 compute-0 sudo[96888]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftahdrkprzhkcrbuwezcvvrtqmdztfmc ; /usr/bin/python3'
Dec 15 10:39:36 compute-0 sudo[96888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:39:36 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Dec 15 10:39:36 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Dec 15 10:39:36 compute-0 python3[96890]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:39:37 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:37.035098573Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Dec 15 10:39:37 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:37.036809134Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.721301ms
Dec 15 10:39:37 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v86: 353 pgs: 4 peering, 4 remapped+peering, 345 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 18 op/s; 214 B/s, 11 objects/s recovering
Dec 15 10:39:37 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:37 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc0042e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:38 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.280815502Z level=info msg="Executing migration" id="create user auth token table"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.282029088Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.216076ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.284928193Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.285970953Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.0437ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.289171536Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.290045812Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=874.766µs
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.291929127Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Dec 15 10:39:38 compute-0 ceph-mon[74356]: 10.d deep-scrub starts
Dec 15 10:39:38 compute-0 ceph-mon[74356]: 10.d deep-scrub ok
Dec 15 10:39:38 compute-0 ceph-mon[74356]: osdmap e76: 3 total, 3 up, 3 in
Dec 15 10:39:38 compute-0 ceph-mon[74356]: 10.e scrub starts
Dec 15 10:39:38 compute-0 ceph-mon[74356]: 10.e scrub ok
Dec 15 10:39:38 compute-0 ceph-mon[74356]: 12.18 scrub starts
Dec 15 10:39:38 compute-0 ceph-mon[74356]: 12.18 scrub ok
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.292846434Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=917.417µs
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.294745509Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.299163589Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=4.41449ms
Dec 15 10:39:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41f95b69fa645e3f4f6ed508c1340b531933e5a27f2b8ab0654f0e8fee861a79/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.35807543Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.360608384Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=2.535604ms
Dec 15 10:39:38 compute-0 podman[96852]: 2025-12-15 10:39:38.362293073 +0000 UTC m=+1.940646906 container init 882ba9047be1c674b34536901f1afd1280219eb9f94bc0913daca9d39619acc0 (image=quay.io/ceph/haproxy:2.3, name=ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-rgw-default-compute-0-vdqmne)
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.363946671Z level=info msg="Executing migration" id="create cache_data table"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.365997441Z level=info msg="Migration successfully executed" id="create cache_data table" duration=2.050741ms
Dec 15 10:39:38 compute-0 podman[96852]: 2025-12-15 10:39:38.369517234 +0000 UTC m=+1.947871047 container start 882ba9047be1c674b34536901f1afd1280219eb9f94bc0913daca9d39619acc0 (image=quay.io/ceph/haproxy:2.3, name=ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-rgw-default-compute-0-vdqmne)
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.37077333Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.372035448Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.262418ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.374757167Z level=info msg="Executing migration" id="create short_url table v1"
Dec 15 10:39:38 compute-0 bash[96852]: 882ba9047be1c674b34536901f1afd1280219eb9f94bc0913daca9d39619acc0
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.377005553Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=2.246896ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.379786594Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-rgw-default-compute-0-vdqmne[96906]: [NOTICE] 348/103938 (2) : New worker #1 (4) forked
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.382224715Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=2.40054ms
Dec 15 10:39:38 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.vdqmne for 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:39:38 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:39:38 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.002000059s ======
Dec 15 10:39:38 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:39:38.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000059s
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:38 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0001930 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.421550654Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.421775561Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=228.087µs
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.436552183Z level=info msg="Executing migration" id="delete alert_definition table"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.436698477Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=149.194µs
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.439363405Z level=info msg="Executing migration" id="recreate alert_definition table"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.440213559Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=851.674µs
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.442047793Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.442905238Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=857.535µs
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.445449402Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.446923416Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.476154ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.455998021Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.456129085Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=134.554µs
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.459138562Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.460796581Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.658799ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.462775048Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.464481589Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.706521ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.467014543Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.468738403Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.72348ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.471087871Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.472667578Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.579567ms
Dec 15 10:39:38 compute-0 podman[96892]: 2025-12-15 10:39:38.474838041 +0000 UTC m=+1.486466224 container create fbbfc339f7420010e7442486412a97237db280f6972c04e18e38025408e9d599 (image=quay.io/ceph/ceph:v19, name=blissful_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.475604884Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Dec 15 10:39:38 compute-0 sudo[96630]: pam_unix(sudo:session): session closed for user root
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.484111453Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=8.502258ms
Dec 15 10:39:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.51449393Z level=info msg="Executing migration" id="drop alert_definition table"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.515971773Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.478333ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.520380582Z level=info msg="Executing migration" id="delete alert_definition_version table"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.520499946Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=119.774µs
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.522958518Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.523914995Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=953.728µs
Dec 15 10:39:38 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.525714948Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.526649015Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=933.487µs
Dec 15 10:39:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.529330673Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.530273241Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=941.928µs
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.532165206Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.532255148Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=91.202µs
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.534439723Z level=info msg="Executing migration" id="drop alert_definition_version table"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.535428161Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=988.248µs
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.53845854Z level=info msg="Executing migration" id="create alert_instance table"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.539335956Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=876.096µs
Dec 15 10:39:38 compute-0 podman[96892]: 2025-12-15 10:39:38.450303145 +0000 UTC m=+1.461931428 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:39:38 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.545677801Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.546823634Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.146553ms
Dec 15 10:39:38 compute-0 systemd[1]: Started libpod-conmon-fbbfc339f7420010e7442486412a97237db280f6972c04e18e38025408e9d599.scope.
Dec 15 10:39:38 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:39:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd62f5f8df95fc4ce1f56b41167ba64d00a7ce0a6b2499a125fa1b5939c9fcb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd62f5f8df95fc4ce1f56b41167ba64d00a7ce0a6b2499a125fa1b5939c9fcb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.59012013Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.591247642Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.129062ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.593801257Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Dec 15 10:39:38 compute-0 podman[96892]: 2025-12-15 10:39:38.597430303 +0000 UTC m=+1.609058526 container init fbbfc339f7420010e7442486412a97237db280f6972c04e18e38025408e9d599 (image=quay.io/ceph/ceph:v19, name=blissful_hodgkin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.599260066Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.450479ms
Dec 15 10:39:38 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.601453711Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.602593434Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.141573ms
Dec 15 10:39:38 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.ihyull on compute-2
Dec 15 10:39:38 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.ihyull on compute-2
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.604527851Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Dec 15 10:39:38 compute-0 podman[96892]: 2025-12-15 10:39:38.604762257 +0000 UTC m=+1.616390450 container start fbbfc339f7420010e7442486412a97237db280f6972c04e18e38025408e9d599 (image=quay.io/ceph/ceph:v19, name=blissful_hodgkin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.605562801Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.03541ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.608050153Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Dec 15 10:39:38 compute-0 podman[96892]: 2025-12-15 10:39:38.60861587 +0000 UTC m=+1.620244053 container attach fbbfc339f7420010e7442486412a97237db280f6972c04e18e38025408e9d599 (image=quay.io/ceph/ceph:v19, name=blissful_hodgkin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.632129657Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=24.075554ms
Dec 15 10:39:38 compute-0 blissful_hodgkin[96924]: ERROR: invalid flag --daemon-type
Dec 15 10:39:38 compute-0 systemd[1]: libpod-fbbfc339f7420010e7442486412a97237db280f6972c04e18e38025408e9d599.scope: Deactivated successfully.
Dec 15 10:39:38 compute-0 podman[96892]: 2025-12-15 10:39:38.654621375 +0000 UTC m=+1.666249578 container died fbbfc339f7420010e7442486412a97237db280f6972c04e18e38025408e9d599 (image=quay.io/ceph/ceph:v19, name=blissful_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.67979031Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.703218564Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=23.429074ms
Dec 15 10:39:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecd62f5f8df95fc4ce1f56b41167ba64d00a7ce0a6b2499a125fa1b5939c9fcb-merged.mount: Deactivated successfully.
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.727726671Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.728682308Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=958.057µs
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.730281375Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.731035167Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=753.412µs
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.732903042Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.737088234Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=4.183931ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.74038764Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Dec 15 10:39:38 compute-0 podman[96892]: 2025-12-15 10:39:38.741123442 +0000 UTC m=+1.752751635 container remove fbbfc339f7420010e7442486412a97237db280f6972c04e18e38025408e9d599 (image=quay.io/ceph/ceph:v19, name=blissful_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.744724896Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.336916ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.746369035Z level=info msg="Executing migration" id="create alert_rule table"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.747175579Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=806.284µs
Dec 15 10:39:38 compute-0 systemd[1]: libpod-conmon-fbbfc339f7420010e7442486412a97237db280f6972c04e18e38025408e9d599.scope: Deactivated successfully.
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.749121016Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.74994664Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=824.925µs
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.751695371Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.752467203Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=771.422µs
Dec 15 10:39:38 compute-0 sudo[96888]: pam_unix(sudo:session): session closed for user root
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.783776108Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.786787876Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=3.013207ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.790260237Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.790487034Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=226.917µs
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.793304426Z level=info msg="Executing migration" id="add column for to alert_rule"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.799011223Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=5.704727ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.801076323Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.805111291Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.034858ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.806766469Z level=info msg="Executing migration" id="add column labels to alert_rule"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.81087229Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=4.105291ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.812329272Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.813044593Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=715.191µs
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.814795684Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.815583687Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=787.433µs
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.839745084Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.845854882Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.106717ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.847705166Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.853950418Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.243152ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.856422261Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.857601115Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.178724ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.859744957Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.86631397Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.555832ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.868431871Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.875983163Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=7.548681ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.878399383Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.878479105Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=81.292µs
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.880298348Z level=info msg="Executing migration" id="create alert_rule_version table"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.88137709Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.078862ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.885396977Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.886348095Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=950.638µs
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.887830178Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.888777356Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=946.758µs
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.890754373Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.890808905Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=54.882µs
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.89265876Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.897700687Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=5.041587ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.945647508Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.950924052Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=5.276204ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.972987717Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.979816777Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.82684ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.982561987Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.988292873Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=5.721336ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.991446186Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.997286436Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=5.840651ms
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.999637545Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Dec 15 10:39:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:38.999738628Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=102.323µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.00216594Z level=info msg="Executing migration" id=create_alert_configuration_table
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.003461497Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.296158ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.006225798Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.013760957Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=7.53182ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.016323802Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.016409895Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=84.083µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.018857697Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.027464018Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=8.59951ms
Dec 15 10:39:39 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v87: 353 pgs: 4 peering, 349 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 10 op/s; 227 B/s, 12 objects/s recovering
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.088584704Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.091492259Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=2.909355ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.093304202Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.09906448Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=5.757258ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.101417709Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.102315016Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=898.177µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.104347565Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.105229531Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=882.536µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.107297201Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.111972327Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=4.674536ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.114133401Z level=info msg="Executing migration" id="create provenance_type table"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.11480936Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=676.469µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.116929913Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.117719235Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=788.992µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.119456296Z level=info msg="Executing migration" id="create alert_image table"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.120127535Z level=info msg="Migration successfully executed" id="create alert_image table" duration=671.069µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.121789355Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.122570668Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=781.714µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.124321758Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.12436909Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=49.752µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.126211643Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.126972326Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=760.643µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.128683265Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.12949602Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=812.454µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.13122531Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.131497668Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.133336402Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.133687692Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=351.21µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.1353418Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.136080022Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=738.062µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.137764671Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.142599232Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=4.833501ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.191293465Z level=info msg="Executing migration" id="create library_element table v1"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.192501311Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.210756ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.196127376Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.197013262Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=885.946µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.19897966Z level=info msg="Executing migration" id="create library_element_connection table v1"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.19966246Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=682.601µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.201673258Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.202473272Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=799.924µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.204225503Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.204954324Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=728.661µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.206672824Z level=info msg="Executing migration" id="increase max description length to 2048"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.206690415Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=18.101µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.208933551Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.208979322Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=45.961µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.210792625Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.211350121Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=558.196µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.213597037Z level=info msg="Executing migration" id="create data_keys table"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.215952596Z level=info msg="Migration successfully executed" id="create data_keys table" duration=2.355418ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.218025286Z level=info msg="Executing migration" id="create secrets table"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.218944163Z level=info msg="Migration successfully executed" id="create secrets table" duration=917.807µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.220813407Z level=info msg="Executing migration" id="rename data_keys name column to id"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:39 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8001fc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.25785585Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=37.026312ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.267589485Z level=info msg="Executing migration" id="add name column into data_keys"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.274602889Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=7.013424ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.2766791Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.276845885Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=166.985µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.279131612Z level=info msg="Executing migration" id="rename data_keys name column to label"
Dec 15 10:39:39 compute-0 ceph-mon[74356]: 10.5 scrub starts
Dec 15 10:39:39 compute-0 ceph-mon[74356]: 10.5 scrub ok
Dec 15 10:39:39 compute-0 ceph-mon[74356]: pgmap v86: 353 pgs: 4 peering, 4 remapped+peering, 345 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 18 op/s; 214 B/s, 11 objects/s recovering
Dec 15 10:39:39 compute-0 ceph-mon[74356]: 12.0 scrub starts
Dec 15 10:39:39 compute-0 ceph-mon[74356]: 12.0 scrub ok
Dec 15 10:39:39 compute-0 ceph-mon[74356]: 12.17 scrub starts
Dec 15 10:39:39 compute-0 ceph-mon[74356]: 12.17 scrub ok
Dec 15 10:39:39 compute-0 ceph-mon[74356]: 12.14 scrub starts
Dec 15 10:39:39 compute-0 ceph-mon[74356]: 12.14 scrub ok
Dec 15 10:39:39 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:39 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:39 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:39 compute-0 ceph-mon[74356]: Deploying daemon haproxy.rgw.default.compute-2.ihyull on compute-2
Dec 15 10:39:39 compute-0 ceph-mon[74356]: 8.1c scrub starts
Dec 15 10:39:39 compute-0 ceph-mon[74356]: 8.1c scrub ok
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.329123483Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=49.97884ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.337668122Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.389067644Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=51.399001ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.431650478Z level=info msg="Executing migration" id="create kv_store table v1"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.433513192Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.865994ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.438165659Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.439450096Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.284007ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.441850026Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.44234136Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=491.684µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.447941794Z level=info msg="Executing migration" id="create permission table"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.44982926Z level=info msg="Migration successfully executed" id="create permission table" duration=1.887656ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.452413205Z level=info msg="Executing migration" id="add unique index permission.role_id"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.454351261Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.938316ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.456850184Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.460061328Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=3.210614ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.462664644Z level=info msg="Executing migration" id="create role table"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.464461807Z level=info msg="Migration successfully executed" id="create role table" duration=1.796362ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.467495765Z level=info msg="Executing migration" id="add column display_name"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.480515726Z level=info msg="Migration successfully executed" id="add column display_name" duration=13.021811ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.505579559Z level=info msg="Executing migration" id="add column group_name"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.519937098Z level=info msg="Migration successfully executed" id="add column group_name" duration=14.356959ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.522282457Z level=info msg="Executing migration" id="add index role.org_id"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.524910343Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=2.627306ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.528851018Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.531188137Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=2.337329ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.536731929Z level=info msg="Executing migration" id="add index role_org_id_uid"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.538260893Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.460472ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.541337173Z level=info msg="Executing migration" id="create team role table"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.542914599Z level=info msg="Migration successfully executed" id="create team role table" duration=1.579016ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.545770082Z level=info msg="Executing migration" id="add index team_role.org_id"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.547456942Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.69289ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.550160181Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.551948333Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.787742ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.554361753Z level=info msg="Executing migration" id="add index team_role.team_id"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.555796776Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.434913ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.558134044Z level=info msg="Executing migration" id="create user role table"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.559348509Z level=info msg="Migration successfully executed" id="create user role table" duration=1.214455ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.561350538Z level=info msg="Executing migration" id="add index user_role.org_id"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.562558643Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.207775ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.564816Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.565960973Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.143922ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.568487976Z level=info msg="Executing migration" id="add index user_role.user_id"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.569702592Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.214736ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.571930677Z level=info msg="Executing migration" id="create builtin role table"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.573118262Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.186835ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.575610615Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.577447638Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.834813ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.581445905Z level=info msg="Executing migration" id="add index builtin_role.name"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.585985728Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=4.535523ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.592272412Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.601240814Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.967272ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.658309651Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.660212306Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.879514ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.685508376Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.687396321Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.890966ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.691925323Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.693069547Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.146964ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.701152243Z level=info msg="Executing migration" id="add unique index role.uid"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.702541623Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.39287ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.704413308Z level=info msg="Executing migration" id="create seed assignment table"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.705151539Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=738.431µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.706977933Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.708422885Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.444652ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.712999029Z level=info msg="Executing migration" id="add column hidden to role table"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.719684624Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=6.684525ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.721824396Z level=info msg="Executing migration" id="permission kind migration"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.728732529Z level=info msg="Migration successfully executed" id="permission kind migration" duration=6.904653ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.733142707Z level=info msg="Executing migration" id="permission attribute migration"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.74181113Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.667083ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.74418557Z level=info msg="Executing migration" id="permission identifier migration"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.750147454Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.960374ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.753065719Z level=info msg="Executing migration" id="add permission identifier index"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.754053359Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=987.4µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.757143189Z level=info msg="Executing migration" id="add permission action scope role_id index"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.758181819Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.03889ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.763435963Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.764409831Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=974.668µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.768252504Z level=info msg="Executing migration" id="create query_history table v1"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.769312724Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.060641ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.772470606Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.773544898Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.075252ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.778097001Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.778181513Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=84.582µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.781050507Z level=info msg="Executing migration" id="rbac disabled migrator"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.781082378Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=32.951µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.783773057Z level=info msg="Executing migration" id="teams permissions migration"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.784247401Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=474.664µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.788877327Z level=info msg="Executing migration" id="dashboard permissions"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.789450112Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=573.236µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.7914089Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.792018537Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=610.057µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.794602373Z level=info msg="Executing migration" id="drop managed folder create actions"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.79485128Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=249.477µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.797335813Z level=info msg="Executing migration" id="alerting notification permissions"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.797796376Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=459.173µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.80031903Z level=info msg="Executing migration" id="create query_history_star table v1"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.80100555Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=686.57µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.812012042Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.813864667Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.852464ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.81637989Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.827440653Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=11.059123ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.830235645Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.830302607Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=68.202µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.832328526Z level=info msg="Executing migration" id="create correlation table v1"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.83693091Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=4.603935ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.849166928Z level=info msg="Executing migration" id="add index correlations.uid"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.851052652Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.889885ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.853388651Z level=info msg="Executing migration" id="add index correlations.source_uid"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.854817303Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.428803ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.8649972Z level=info msg="Executing migration" id="add correlation config column"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.871747338Z level=info msg="Migration successfully executed" id="add correlation config column" duration=6.754477ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.873872269Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.875424425Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.552736ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.878551916Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.879591446Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.0403ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.881680037Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.899293832Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=17.606645ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.908742399Z level=info msg="Executing migration" id="create correlation v2"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.910011815Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.269976ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.912338223Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.913335793Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=997.929µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.915280969Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.916450164Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.169615ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.918319238Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.919257216Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=937.338µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.924246721Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.924616142Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=370.621µs
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.926817246Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.928479145Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.668289ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.953529187Z level=info msg="Executing migration" id="add provisioning column"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.961479239Z level=info msg="Migration successfully executed" id="add provisioning column" duration=7.963372ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.963871589Z level=info msg="Executing migration" id="create entity_events table"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.965733703Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.865744ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.967901727Z level=info msg="Executing migration" id="create dashboard public config v1"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.969268467Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.36637ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.97311901Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.973651826Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.976437187Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.97689812Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.979439714Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Dec 15 10:39:39 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.981306789Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.870326ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.984706808Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.986098159Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.39286ms
Dec 15 10:39:39 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.994869455Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.996176143Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.311238ms
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.998760759Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec 15 10:39:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:39.999710076Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=948.857µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.001282443Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.002091896Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=812.083µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.004333632Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.005152105Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=818.173µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.006765913Z level=info msg="Executing migration" id="Drop public config table"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.007554955Z level=info msg="Migration successfully executed" id="Drop public config table" duration=788.612µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.016251389Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.017421174Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.173225ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.0227574Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.023706778Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=949.618µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.026000925Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.027006114Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.005459ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.030903238Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.032073452Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.170734ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.036768139Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.061556003Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=24.781574ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.070391002Z level=info msg="Executing migration" id="add annotations_enabled column"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.077814928Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=7.416677ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.079747915Z level=info msg="Executing migration" id="add time_selection_enabled column"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.085842643Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.093168ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.087630246Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.087823571Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=193.606µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.08951503Z level=info msg="Executing migration" id="add share column"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.096161465Z level=info msg="Migration successfully executed" id="add share column" duration=6.643985ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.10045695Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.100675856Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=220.126µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.103272962Z level=info msg="Executing migration" id="create file table"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.104175629Z level=info msg="Migration successfully executed" id="create file table" duration=903.047µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.106145836Z level=info msg="Executing migration" id="file table idx: path natural pk"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.107021762Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=875.846µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.109047941Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.109890096Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=841.575µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.14561221Z level=info msg="Executing migration" id="create file_meta table"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.146832066Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.220036ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.149167864Z level=info msg="Executing migration" id="file table idx: path key"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.150128341Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=960.317µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.151854912Z level=info msg="Executing migration" id="set path collation in file table"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.151919724Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=61.602µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.153784618Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.15384322Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=59.352µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.155878339Z level=info msg="Executing migration" id="managed permissions migration"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.156370444Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=492.025µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.158307031Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.158504176Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=197.795µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.160890516Z level=info msg="Executing migration" id="RBAC action name migrator"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.162351269Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.461143ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.164507921Z level=info msg="Executing migration" id="Add UID column to playlist"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.172546776Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=8.032115ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.176306036Z level=info msg="Executing migration" id="Update uid column values in playlist"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.176503412Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=198.566µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.178932793Z level=info msg="Executing migration" id="Add index for uid in playlist"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.180732816Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.804263ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.182682403Z level=info msg="Executing migration" id="update group index for alert rules"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.183099865Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=415.052µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.185261078Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.185479645Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=218.946µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.187126462Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.187594686Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=466.664µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.189299386Z level=info msg="Executing migration" id="add action column to seed_assignment"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.196702342Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=7.394867ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.199321339Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.206159618Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=6.833669ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.208523188Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.209952609Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.432661ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.214709958Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:40 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc0042e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.293895132Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=79.183854ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.296330523Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.297347553Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.01745ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.314484754Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.315419621Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=935.117µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.317920475Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.339524126Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=21.60056ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.352672469Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.360997993Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=8.320633ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.362826386Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.363110044Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=284.798µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.364799504Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.364962909Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=165.925µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.366616107Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.366798082Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=182.105µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.368839582Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.369016277Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=176.735µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.37115523Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.371377706Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=222.306µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.373859599Z level=info msg="Executing migration" id="create folder table"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.374750754Z level=info msg="Migration successfully executed" id="create folder table" duration=890.605µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.376744453Z level=info msg="Executing migration" id="Add index for parent_uid"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.377766053Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.02113ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.380227914Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.381358908Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.130774ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.38316149Z level=info msg="Executing migration" id="Update folder title length"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.383185391Z level=info msg="Migration successfully executed" id="Update folder title length" duration=24.661µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.401243789Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Dec 15 10:39:40 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:39:40 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:39:40 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:39:40.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.403081092Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.838643ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:40 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc0042e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:40 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:39:40 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:39:40 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:39:40.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.431909155Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.433850322Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.942256ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.436646953Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.438710493Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=2.06346ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.441914967Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.44235061Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=435.813µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.444580275Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.444811132Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=230.767µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.446916493Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.448299424Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.385901ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.450341483Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.451660902Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.319919ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.453956039Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Dec 15 10:39:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.455392371Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.434111ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.457919325Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.459711147Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.791872ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.461684515Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.463121627Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.436862ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.467134634Z level=info msg="Executing migration" id="create anon_device table"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.468314469Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.180065ms
Dec 15 10:39:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.470653507Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Dec 15 10:39:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.47213097Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.478173ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.475022924Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.476647732Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.628258ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.479744853Z level=info msg="Executing migration" id="create signing_key table"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.480891436Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.146853ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.485702367Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.487288493Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.587106ms
Dec 15 10:39:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.489772595Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Dec 15 10:39:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.490810156Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.038961ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.546306378Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.546890115Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=587.607µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.55014471Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Dec 15 10:39:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.568364462Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=18.210972ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.573838302Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.575551352Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.71522ms
Dec 15 10:39:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:40 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 15 10:39:40 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 15 10:39:40 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 15 10:39:40 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 15 10:39:40 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.zotndm on compute-0
Dec 15 10:39:40 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.zotndm on compute-0
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.579760525Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.581817285Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=2.05642ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.584029559Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.586479662Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=2.449562ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.589477579Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.591526499Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=2.05069ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.606724202Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.60800962Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.287918ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.612066218Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.61415227Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=2.094012ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.616525329Z level=info msg="Executing migration" id="create sso_setting table"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.617757745Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.232366ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.620089654Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.62101851Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=930.226µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.623238615Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.623584935Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=347.92µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.625503982Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.625570924Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=67.692µs
Dec 15 10:39:40 compute-0 sudo[96958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:39:40 compute-0 sudo[96958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:39:40 compute-0 sudo[96958]: pam_unix(sudo:session): session closed for user root
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.65487715Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.670450495Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=15.573096ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.672910037Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.6819126Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.002203ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.683768384Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.684160356Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=392.452µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=migrator t=2025-12-15T10:39:40.686572586Z level=info msg="migrations completed" performed=547 skipped=0 duration=5.778024728s
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=sqlstore t=2025-12-15T10:39:40.68911785Z level=info msg="Created default organization"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=secrets t=2025-12-15T10:39:40.692112638Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Dec 15 10:39:40 compute-0 sudo[96983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:39:40 compute-0 sudo[96983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=plugin.store t=2025-12-15T10:39:40.723074273Z level=info msg="Loading plugins..."
Dec 15 10:39:40 compute-0 ceph-mon[74356]: pgmap v87: 353 pgs: 4 peering, 349 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 10 op/s; 227 B/s, 12 objects/s recovering
Dec 15 10:39:40 compute-0 ceph-mon[74356]: 12.16 scrub starts
Dec 15 10:39:40 compute-0 ceph-mon[74356]: 12.16 scrub ok
Dec 15 10:39:40 compute-0 ceph-mon[74356]: 9.1d scrub starts
Dec 15 10:39:40 compute-0 ceph-mon[74356]: 9.1d scrub ok
Dec 15 10:39:40 compute-0 ceph-mon[74356]: 9.4 scrub starts
Dec 15 10:39:40 compute-0 ceph-mon[74356]: 9.4 scrub ok
Dec 15 10:39:40 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:40 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:40 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:40 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=local.finder t=2025-12-15T10:39:40.806335555Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=plugin.store t=2025-12-15T10:39:40.806376996Z level=info msg="Plugins loaded" count=55 duration=83.303303ms
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=query_data t=2025-12-15T10:39:40.809691383Z level=info msg="Query Service initialization"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=live.push_http t=2025-12-15T10:39:40.813288089Z level=info msg="Live Push Gateway initialization"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=ngalert.migration t=2025-12-15T10:39:40.880853412Z level=info msg=Starting
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=ngalert.migration t=2025-12-15T10:39:40.883047007Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=ngalert.migration orgID=1 t=2025-12-15T10:39:40.884146138Z level=info msg="Migrating alerts for organisation"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=ngalert.migration orgID=1 t=2025-12-15T10:39:40.885706975Z level=info msg="Alerts found to migrate" alerts=0
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=ngalert.migration t=2025-12-15T10:39:40.889216236Z level=info msg="Completed alerting migration"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=ngalert.state.manager t=2025-12-15T10:39:40.927385632Z level=info msg="Running in alternative execution of Error/NoData mode"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=infra.usagestats.collector t=2025-12-15T10:39:40.929528485Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=provisioning.datasources t=2025-12-15T10:39:40.930534294Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=provisioning.alerting t=2025-12-15T10:39:40.942435751Z level=info msg="starting to provision alerting"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=provisioning.alerting t=2025-12-15T10:39:40.942550525Z level=info msg="finished to provision alerting"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=grafanaStorageLogger t=2025-12-15T10:39:40.94274092Z level=info msg="Storage starting"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=ngalert.state.manager t=2025-12-15T10:39:40.942937306Z level=info msg="Warming state cache for startup"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=ngalert.state.manager t=2025-12-15T10:39:40.94341175Z level=info msg="State cache has been initialized" states=0 duration=472.894µs
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=ngalert.multiorg.alertmanager t=2025-12-15T10:39:40.943816312Z level=info msg="Starting MultiOrg Alertmanager"
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=ngalert.scheduler t=2025-12-15T10:39:40.943840343Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=ticker t=2025-12-15T10:39:40.943882984Z level=info msg=starting first_tick=2025-12-15T10:39:50Z
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=http.server t=2025-12-15T10:39:40.94546774Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=http.server t=2025-12-15T10:39:40.945781169Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Dec 15 10:39:40 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Dec 15 10:39:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=provisioning.dashboard t=2025-12-15T10:39:40.99850557Z level=info msg="starting to provision dashboards"
Dec 15 10:39:41 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Dec 15 10:39:41 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=plugins.update.checker t=2025-12-15T10:39:41.01970586Z level=info msg="Update check succeeded" duration=76.667571ms
Dec 15 10:39:41 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=grafana.update.checker t=2025-12-15T10:39:41.027106666Z level=info msg="Update check succeeded" duration=82.802949ms
Dec 15 10:39:41 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v88: 353 pgs: 4 peering, 349 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 0 B/s wr, 9 op/s; 195 B/s, 10 objects/s recovering
Dec 15 10:39:41 compute-0 podman[97056]: 2025-12-15 10:39:41.073067919 +0000 UTC m=+0.040682640 container create 639bcdcfbe18b648bb9f6a8e78c0c199c11590f6e77e7cb3f39cd73d97e4705f (image=quay.io/ceph/keepalived:2.2.4, name=jovial_brattain, architecture=x86_64, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, name=keepalived, release=1793, distribution-scope=public, vcs-type=git)
Dec 15 10:39:41 compute-0 systemd[1]: Started libpod-conmon-639bcdcfbe18b648bb9f6a8e78c0c199c11590f6e77e7cb3f39cd73d97e4705f.scope.
Dec 15 10:39:41 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=grafana-apiserver t=2025-12-15T10:39:41.131102705Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Dec 15 10:39:41 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=grafana-apiserver t=2025-12-15T10:39:41.133723171Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Dec 15 10:39:41 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:39:41 compute-0 podman[97056]: 2025-12-15 10:39:41.057651938 +0000 UTC m=+0.025266689 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 15 10:39:41 compute-0 podman[97056]: 2025-12-15 10:39:41.245053994 +0000 UTC m=+0.212668825 container init 639bcdcfbe18b648bb9f6a8e78c0c199c11590f6e77e7cb3f39cd73d97e4705f (image=quay.io/ceph/keepalived:2.2.4, name=jovial_brattain, release=1793, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=keepalived, description=keepalived for Ceph)
Dec 15 10:39:41 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:41 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0002da0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:41 compute-0 podman[97056]: 2025-12-15 10:39:41.255092567 +0000 UTC m=+0.222707288 container start 639bcdcfbe18b648bb9f6a8e78c0c199c11590f6e77e7cb3f39cd73d97e4705f (image=quay.io/ceph/keepalived:2.2.4, name=jovial_brattain, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, release=1793, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec 15 10:39:41 compute-0 jovial_brattain[97072]: 0 0
Dec 15 10:39:41 compute-0 systemd[1]: libpod-639bcdcfbe18b648bb9f6a8e78c0c199c11590f6e77e7cb3f39cd73d97e4705f.scope: Deactivated successfully.
Dec 15 10:39:41 compute-0 podman[97056]: 2025-12-15 10:39:41.267077247 +0000 UTC m=+0.234691968 container attach 639bcdcfbe18b648bb9f6a8e78c0c199c11590f6e77e7cb3f39cd73d97e4705f (image=quay.io/ceph/keepalived:2.2.4, name=jovial_brattain, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, vcs-type=git, version=2.2.4, io.openshift.tags=Ceph keepalived, release=1793, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Dec 15 10:39:41 compute-0 podman[97056]: 2025-12-15 10:39:41.267665954 +0000 UTC m=+0.235280675 container died 639bcdcfbe18b648bb9f6a8e78c0c199c11590f6e77e7cb3f39cd73d97e4705f (image=quay.io/ceph/keepalived:2.2.4, name=jovial_brattain, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.28.2, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.openshift.tags=Ceph keepalived, distribution-scope=public)
Dec 15 10:39:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e54a065a529717414da6444d0447d5b05677b757a8aaefa5d53fa0aeef85192-merged.mount: Deactivated successfully.
Dec 15 10:39:41 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:39:41 compute-0 podman[97056]: 2025-12-15 10:39:41.413743503 +0000 UTC m=+0.381358264 container remove 639bcdcfbe18b648bb9f6a8e78c0c199c11590f6e77e7cb3f39cd73d97e4705f (image=quay.io/ceph/keepalived:2.2.4, name=jovial_brattain, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, version=2.2.4, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, description=keepalived for Ceph, distribution-scope=public, release=1793, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 15 10:39:41 compute-0 systemd[1]: libpod-conmon-639bcdcfbe18b648bb9f6a8e78c0c199c11590f6e77e7cb3f39cd73d97e4705f.scope: Deactivated successfully.
Dec 15 10:39:41 compute-0 systemd[1]: Reloading.
Dec 15 10:39:41 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=provisioning.dashboard t=2025-12-15T10:39:41.575318114Z level=info msg="finished to provision dashboards"
Dec 15 10:39:41 compute-0 systemd-sysv-generator[97127]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:39:41 compute-0 systemd-rc-local-generator[97123]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:39:41 compute-0 ceph-mon[74356]: 12.1 scrub starts
Dec 15 10:39:41 compute-0 ceph-mon[74356]: 12.1 scrub ok
Dec 15 10:39:41 compute-0 ceph-mon[74356]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 15 10:39:41 compute-0 ceph-mon[74356]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 15 10:39:41 compute-0 ceph-mon[74356]: Deploying daemon keepalived.rgw.default.compute-0.zotndm on compute-0
Dec 15 10:39:41 compute-0 ceph-mon[74356]: 8.f scrub starts
Dec 15 10:39:41 compute-0 ceph-mon[74356]: 8.f scrub ok
Dec 15 10:39:41 compute-0 ceph-mon[74356]: 11.18 scrub starts
Dec 15 10:39:41 compute-0 ceph-mon[74356]: 11.18 scrub ok
Dec 15 10:39:41 compute-0 systemd[1]: Reloading.
Dec 15 10:39:41 compute-0 systemd-rc-local-generator[97166]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:39:41 compute-0 systemd-sysv-generator[97171]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:39:42 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Dec 15 10:39:42 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Dec 15 10:39:42 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.zotndm for 77365f67-614e-5a8d-b658-640395550c79...
Dec 15 10:39:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:42 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8001fc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:42 compute-0 podman[97222]: 2025-12-15 10:39:42.38870143 +0000 UTC m=+0.058279893 container create 9df1f24a3773051b2c8597389c8710d6fc28225e8140952fb679e1a56e88c304 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-rgw-default-compute-0-zotndm, distribution-scope=public, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., name=keepalived, version=2.2.4, build-date=2023-02-22T09:23:20, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 15 10:39:42 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:39:42 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:39:42 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:39:42.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:39:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:42 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:42 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:39:42 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:39:42 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:39:42.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:39:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f58807538a1cc46ca7a5085f4551ddbbcff76a00acc127e48fdd474b56fcace5/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:42 compute-0 podman[97222]: 2025-12-15 10:39:42.445100528 +0000 UTC m=+0.114678991 container init 9df1f24a3773051b2c8597389c8710d6fc28225e8140952fb679e1a56e88c304 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-rgw-default-compute-0-zotndm, distribution-scope=public, name=keepalived, release=1793, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.openshift.expose-services=, version=2.2.4, io.buildah.version=1.28.2, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container)
Dec 15 10:39:42 compute-0 podman[97222]: 2025-12-15 10:39:42.4499426 +0000 UTC m=+0.119521063 container start 9df1f24a3773051b2c8597389c8710d6fc28225e8140952fb679e1a56e88c304 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-rgw-default-compute-0-zotndm, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., name=keepalived, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, architecture=x86_64, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph)
Dec 15 10:39:42 compute-0 podman[97222]: 2025-12-15 10:39:42.366069579 +0000 UTC m=+0.035648072 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 15 10:39:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-rgw-default-compute-0-zotndm[97237]: Mon Dec 15 10:39:42 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec 15 10:39:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-rgw-default-compute-0-zotndm[97237]: Mon Dec 15 10:39:42 2025: Running on Linux 5.14.0-648.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025 (built for Linux 5.14.0)
Dec 15 10:39:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-rgw-default-compute-0-zotndm[97237]: Mon Dec 15 10:39:42 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec 15 10:39:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-rgw-default-compute-0-zotndm[97237]: Mon Dec 15 10:39:42 2025: Configuration file /etc/keepalived/keepalived.conf
Dec 15 10:39:42 compute-0 bash[97222]: 9df1f24a3773051b2c8597389c8710d6fc28225e8140952fb679e1a56e88c304
Dec 15 10:39:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-rgw-default-compute-0-zotndm[97237]: Mon Dec 15 10:39:42 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Dec 15 10:39:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-rgw-default-compute-0-zotndm[97237]: Mon Dec 15 10:39:42 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec 15 10:39:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-rgw-default-compute-0-zotndm[97237]: Mon Dec 15 10:39:42 2025: Starting VRRP child process, pid=4
Dec 15 10:39:42 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.zotndm for 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:39:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-rgw-default-compute-0-zotndm[97237]: Mon Dec 15 10:39:42 2025: Startup complete
Dec 15 10:39:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd[95528]: Mon Dec 15 10:39:42 2025: (VI_0) Entering BACKUP STATE
Dec 15 10:39:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-rgw-default-compute-0-zotndm[97237]: Mon Dec 15 10:39:42 2025: (VI_0) Entering BACKUP STATE (init)
Dec 15 10:39:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-rgw-default-compute-0-zotndm[97237]: Mon Dec 15 10:39:42 2025: VRRP_Script(check_backend) succeeded
Dec 15 10:39:42 compute-0 sudo[96983]: pam_unix(sudo:session): session closed for user root
Dec 15 10:39:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:39:42 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:39:42 compute-0 ceph-mon[74356]: pgmap v88: 353 pgs: 4 peering, 349 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 0 B/s wr, 9 op/s; 195 B/s, 10 objects/s recovering
Dec 15 10:39:42 compute-0 ceph-mon[74356]: 11.1a deep-scrub starts
Dec 15 10:39:42 compute-0 ceph-mon[74356]: 11.1a deep-scrub ok
Dec 15 10:39:42 compute-0 ceph-mon[74356]: 12.7 scrub starts
Dec 15 10:39:42 compute-0 ceph-mon[74356]: 12.7 scrub ok
Dec 15 10:39:42 compute-0 ceph-mon[74356]: 9.1a scrub starts
Dec 15 10:39:42 compute-0 ceph-mon[74356]: 9.1a scrub ok
Dec 15 10:39:42 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 15 10:39:42 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:42 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 15 10:39:42 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 15 10:39:42 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 15 10:39:42 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 15 10:39:42 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.vlqsys on compute-2
Dec 15 10:39:42 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.vlqsys on compute-2
Dec 15 10:39:43 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Dec 15 10:39:43 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Dec 15 10:39:43 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v89: 353 pgs: 353 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 99 B/s, 5 objects/s recovering
Dec 15 10:39:43 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Dec 15 10:39:43 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec 15 10:39:43 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd[95528]: Mon Dec 15 10:39:43 2025: (VI_0) Entering MASTER STATE
Dec 15 10:39:43 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:43 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc0042e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:43 compute-0 ceph-mon[74356]: 8.19 scrub starts
Dec 15 10:39:43 compute-0 ceph-mon[74356]: 8.19 scrub ok
Dec 15 10:39:43 compute-0 ceph-mon[74356]: 8.3 deep-scrub starts
Dec 15 10:39:43 compute-0 ceph-mon[74356]: 8.3 deep-scrub ok
Dec 15 10:39:43 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:43 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:43 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:43 compute-0 ceph-mon[74356]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 15 10:39:43 compute-0 ceph-mon[74356]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 15 10:39:43 compute-0 ceph-mon[74356]: Deploying daemon keepalived.rgw.default.compute-2.vlqsys on compute-2
Dec 15 10:39:43 compute-0 ceph-mon[74356]: 9.1b scrub starts
Dec 15 10:39:43 compute-0 ceph-mon[74356]: 9.1b scrub ok
Dec 15 10:39:43 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec 15 10:39:43 compute-0 ceph-mon[74356]: 11.13 scrub starts
Dec 15 10:39:43 compute-0 ceph-mon[74356]: 11.13 scrub ok
Dec 15 10:39:43 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec 15 10:39:43 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 15 10:39:43 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec 15 10:39:43 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec 15 10:39:44 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Dec 15 10:39:44 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Dec 15 10:39:44 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:44 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0002da0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:44 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:39:44 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:39:44 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:39:44.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:39:44 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:44 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c80032f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:44 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:39:44 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:39:44 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:39:44.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:39:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:39:44 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:39:44 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 15 10:39:44 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:44 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev 323c44e5-c5b0-419d-b193-76474c7238a9 (Updating ingress.rgw.default deployment (+4 -> 4))
Dec 15 10:39:44 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event 323c44e5-c5b0-419d-b193-76474c7238a9 (Updating ingress.rgw.default deployment (+4 -> 4)) in 10 seconds
Dec 15 10:39:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 15 10:39:44 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:44 compute-0 ceph-mgr[74651]: [progress INFO root] update: starting ev 53617e54-b36d-4b76-9081-ec81e7f1af88 (Updating prometheus deployment (+1 -> 1))
Dec 15 10:39:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec 15 10:39:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec 15 10:39:44 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec 15 10:39:45 compute-0 ceph-mon[74356]: pgmap v89: 353 pgs: 353 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 99 B/s, 5 objects/s recovering
Dec 15 10:39:45 compute-0 ceph-mon[74356]: 9.11 scrub starts
Dec 15 10:39:45 compute-0 ceph-mon[74356]: 9.11 scrub ok
Dec 15 10:39:45 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 15 10:39:45 compute-0 ceph-mon[74356]: osdmap e77: 3 total, 3 up, 3 in
Dec 15 10:39:45 compute-0 ceph-mon[74356]: 8.1a scrub starts
Dec 15 10:39:45 compute-0 ceph-mon[74356]: 8.1a scrub ok
Dec 15 10:39:45 compute-0 ceph-mon[74356]: 8.15 scrub starts
Dec 15 10:39:45 compute-0 ceph-mon[74356]: 8.15 scrub ok
Dec 15 10:39:45 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:45 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:45 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:45 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:45 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v92: 353 pgs: 353 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec 15 10:39:45 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Dec 15 10:39:45 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec 15 10:39:45 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Dec 15 10:39:45 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Dec 15 10:39:45 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Dec 15 10:39:45 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Dec 15 10:39:45 compute-0 sudo[97247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:39:45 compute-0 sudo[97247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:39:45 compute-0 sudo[97247]: pam_unix(sudo:session): session closed for user root
Dec 15 10:39:45 compute-0 sudo[97272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/prometheus:v2.51.0 --timeout 895 _orch deploy --fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:39:45 compute-0 sudo[97272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:39:45 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:45 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec 15 10:39:46 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-rgw-default-compute-0-zotndm[97237]: Mon Dec 15 10:39:46 2025: (VI_0) Entering MASTER STATE
Dec 15 10:39:46 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Dec 15 10:39:46 compute-0 ceph-mon[74356]: 9.12 scrub starts
Dec 15 10:39:46 compute-0 ceph-mon[74356]: 9.12 scrub ok
Dec 15 10:39:46 compute-0 ceph-mon[74356]: osdmap e78: 3 total, 3 up, 3 in
Dec 15 10:39:46 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec 15 10:39:46 compute-0 ceph-mon[74356]: 9.19 scrub starts
Dec 15 10:39:46 compute-0 ceph-mon[74356]: 9.19 scrub ok
Dec 15 10:39:46 compute-0 ceph-mon[74356]: 12.11 scrub starts
Dec 15 10:39:46 compute-0 ceph-mon[74356]: 12.11 scrub ok
Dec 15 10:39:46 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Dec 15 10:39:46 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 15 10:39:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec 15 10:39:46 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec 15 10:39:46 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 79 pg[10.18( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=79) [0] r=0 lpr=79 pi=[59,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:46 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 79 pg[10.8( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=79) [0] r=0 lpr=79 pi=[59,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:46 compute-0 ceph-mgr[74651]: [progress INFO root] Writing back 27 completed events
Dec 15 10:39:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 15 10:39:46 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:46 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event a600d6fc-567e-4825-9ef6-8dd227381eed (Global Recovery Event) in 10 seconds
Dec 15 10:39:46 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:46 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc0042e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:39:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec 15 10:39:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec 15 10:39:46 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec 15 10:39:46 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 80 pg[10.8( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[59,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:46 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 80 pg[10.8( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[59,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:46 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 80 pg[10.18( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[59,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:46 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 80 pg[10.18( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[59,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:46 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:39:46 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:39:46 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:39:46.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:39:46 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:46 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0002da0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:46 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:39:46 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 15 10:39:46 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:39:46.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 15 10:39:47 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v95: 353 pgs: 353 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Dec 15 10:39:47 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 15 10:39:47 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.1e deep-scrub starts
Dec 15 10:39:47 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.1e deep-scrub ok
Dec 15 10:39:47 compute-0 ceph-mon[74356]: pgmap v92: 353 pgs: 353 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec 15 10:39:47 compute-0 ceph-mon[74356]: Deploying daemon prometheus.compute-0 on compute-0
Dec 15 10:39:47 compute-0 ceph-mon[74356]: 11.1b scrub starts
Dec 15 10:39:47 compute-0 ceph-mon[74356]: 11.1b scrub ok
Dec 15 10:39:47 compute-0 ceph-mon[74356]: 9.1e scrub starts
Dec 15 10:39:47 compute-0 ceph-mon[74356]: 9.1e scrub ok
Dec 15 10:39:47 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 15 10:39:47 compute-0 ceph-mon[74356]: osdmap e79: 3 total, 3 up, 3 in
Dec 15 10:39:47 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:47 compute-0 ceph-mon[74356]: 11.1e scrub starts
Dec 15 10:39:47 compute-0 ceph-mon[74356]: osdmap e80: 3 total, 3 up, 3 in
Dec 15 10:39:47 compute-0 ceph-mon[74356]: 11.1e scrub ok
Dec 15 10:39:47 compute-0 ceph-mon[74356]: 9.13 scrub starts
Dec 15 10:39:47 compute-0 ceph-mon[74356]: 9.13 scrub ok
Dec 15 10:39:47 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 15 10:39:47 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:47 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c80032f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec 15 10:39:47 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 15 10:39:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec 15 10:39:47 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec 15 10:39:48 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Dec 15 10:39:48 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Dec 15 10:39:48 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:48 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003c30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:48 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec 15 10:39:48 compute-0 ceph-mon[74356]: pgmap v95: 353 pgs: 353 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:48 compute-0 ceph-mon[74356]: 8.1e deep-scrub starts
Dec 15 10:39:48 compute-0 ceph-mon[74356]: 8.1e deep-scrub ok
Dec 15 10:39:48 compute-0 ceph-mon[74356]: 11.1d scrub starts
Dec 15 10:39:48 compute-0 ceph-mon[74356]: 11.1d scrub ok
Dec 15 10:39:48 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 15 10:39:48 compute-0 ceph-mon[74356]: osdmap e81: 3 total, 3 up, 3 in
Dec 15 10:39:48 compute-0 ceph-mon[74356]: 12.1d scrub starts
Dec 15 10:39:48 compute-0 ceph-mon[74356]: 12.1d scrub ok
Dec 15 10:39:48 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec 15 10:39:48 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:39:48 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:39:48 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:39:48.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:39:48 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec 15 10:39:48 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:48 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc0042e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:48 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 82 pg[10.8( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=7 ec=59/47 lis/c=80/59 les/c/f=81/60/0 sis=82) [0] r=0 lpr=82 pi=[59,82)/1 luod=0'0 crt=54'1067 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:48 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 82 pg[10.8( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=7 ec=59/47 lis/c=80/59 les/c/f=81/60/0 sis=82) [0] r=0 lpr=82 pi=[59,82)/1 crt=54'1067 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:48 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 82 pg[10.18( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=5 ec=59/47 lis/c=80/59 les/c/f=81/60/0 sis=82) [0] r=0 lpr=82 pi=[59,82)/1 luod=0'0 crt=54'1067 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:48 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 82 pg[10.18( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=5 ec=59/47 lis/c=80/59 les/c/f=81/60/0 sis=82) [0] r=0 lpr=82 pi=[59,82)/1 crt=54'1067 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:48 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:39:48 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:39:48 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:39:48.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:39:48 compute-0 podman[97340]: 2025-12-15 10:39:48.751334569 +0000 UTC m=+3.133558470 volume create 9ccfb734b975441e64659970b522c994d3b560fcd10b363ba497fee809ca8da9
Dec 15 10:39:48 compute-0 podman[97340]: 2025-12-15 10:39:48.759670083 +0000 UTC m=+3.141893984 container create 4160087ae7fd73fdc1a0fd194f5cf4f6f832f3b7a136395d8eb8d4c0f721e446 (image=quay.io/prometheus/prometheus:v2.51.0, name=admiring_turing, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:48 compute-0 systemd[1]: Started libpod-conmon-4160087ae7fd73fdc1a0fd194f5cf4f6f832f3b7a136395d8eb8d4c0f721e446.scope.
Dec 15 10:39:48 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14b434de6352e26359605b1fc2e2de21abd2f7e351df4605e0744d975dae619d/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:48 compute-0 podman[97340]: 2025-12-15 10:39:48.734291381 +0000 UTC m=+3.116515312 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec 15 10:39:48 compute-0 podman[97340]: 2025-12-15 10:39:48.838829806 +0000 UTC m=+3.221053727 container init 4160087ae7fd73fdc1a0fd194f5cf4f6f832f3b7a136395d8eb8d4c0f721e446 (image=quay.io/prometheus/prometheus:v2.51.0, name=admiring_turing, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:48 compute-0 podman[97340]: 2025-12-15 10:39:48.850368203 +0000 UTC m=+3.232592104 container start 4160087ae7fd73fdc1a0fd194f5cf4f6f832f3b7a136395d8eb8d4c0f721e446 (image=quay.io/prometheus/prometheus:v2.51.0, name=admiring_turing, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:48 compute-0 podman[97340]: 2025-12-15 10:39:48.854460492 +0000 UTC m=+3.236684383 container attach 4160087ae7fd73fdc1a0fd194f5cf4f6f832f3b7a136395d8eb8d4c0f721e446 (image=quay.io/prometheus/prometheus:v2.51.0, name=admiring_turing, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:48 compute-0 systemd[1]: libpod-4160087ae7fd73fdc1a0fd194f5cf4f6f832f3b7a136395d8eb8d4c0f721e446.scope: Deactivated successfully.
Dec 15 10:39:48 compute-0 admiring_turing[97597]: 65534 65534
Dec 15 10:39:48 compute-0 conmon[97597]: conmon 4160087ae7fd73fdc1a0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4160087ae7fd73fdc1a0fd194f5cf4f6f832f3b7a136395d8eb8d4c0f721e446.scope/container/memory.events
Dec 15 10:39:48 compute-0 podman[97340]: 2025-12-15 10:39:48.856348268 +0000 UTC m=+3.238572169 container died 4160087ae7fd73fdc1a0fd194f5cf4f6f832f3b7a136395d8eb8d4c0f721e446 (image=quay.io/prometheus/prometheus:v2.51.0, name=admiring_turing, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:48 compute-0 sudo[97619]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxnhkipwbxujodavtehetwexbqqvpyzi ; /usr/bin/python3'
Dec 15 10:39:48 compute-0 sudo[97619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:39:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-14b434de6352e26359605b1fc2e2de21abd2f7e351df4605e0744d975dae619d-merged.mount: Deactivated successfully.
Dec 15 10:39:48 compute-0 podman[97340]: 2025-12-15 10:39:48.916436693 +0000 UTC m=+3.298660594 container remove 4160087ae7fd73fdc1a0fd194f5cf4f6f832f3b7a136395d8eb8d4c0f721e446 (image=quay.io/prometheus/prometheus:v2.51.0, name=admiring_turing, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:48 compute-0 podman[97340]: 2025-12-15 10:39:48.920825852 +0000 UTC m=+3.303049753 volume remove 9ccfb734b975441e64659970b522c994d3b560fcd10b363ba497fee809ca8da9
Dec 15 10:39:48 compute-0 systemd[1]: libpod-conmon-4160087ae7fd73fdc1a0fd194f5cf4f6f832f3b7a136395d8eb8d4c0f721e446.scope: Deactivated successfully.
Dec 15 10:39:49 compute-0 python3[97624]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:39:49 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v98: 353 pgs: 353 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Dec 15 10:39:49 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 15 10:39:49 compute-0 podman[97635]: 2025-12-15 10:39:48.979501137 +0000 UTC m=+0.027445134 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec 15 10:39:49 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Dec 15 10:39:49 compute-0 podman[97635]: 2025-12-15 10:39:49.083926917 +0000 UTC m=+0.131870854 volume create c6cee3786986e88991b93f9c995c8604f0ecd54024863db9237283c68caeef8c
Dec 15 10:39:49 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Dec 15 10:39:49 compute-0 podman[97635]: 2025-12-15 10:39:49.09358786 +0000 UTC m=+0.141531757 container create 542a8f4ef5f8b1331cd365b2f56b1333183eacce264ab8aedcbb676224316ece (image=quay.io/prometheus/prometheus:v2.51.0, name=hopeful_zhukovsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:49 compute-0 podman[97647]: 2025-12-15 10:39:49.125895783 +0000 UTC m=+0.087177928 container create 708fc0568a27aa0484109eb5c6eb8eb43fe2e37f726dd38e93deaf5c04a7033d (image=quay.io/ceph/ceph:v19, name=reverent_liskov, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 15 10:39:49 compute-0 systemd[1]: Started libpod-conmon-542a8f4ef5f8b1331cd365b2f56b1333183eacce264ab8aedcbb676224316ece.scope.
Dec 15 10:39:49 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d54a552e0d58875075d86ec49e14ac0948098b7a52506363f7e0b12a5b0c5ab/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:49 compute-0 systemd[1]: Started libpod-conmon-708fc0568a27aa0484109eb5c6eb8eb43fe2e37f726dd38e93deaf5c04a7033d.scope.
Dec 15 10:39:49 compute-0 podman[97635]: 2025-12-15 10:39:49.168766566 +0000 UTC m=+0.216710483 container init 542a8f4ef5f8b1331cd365b2f56b1333183eacce264ab8aedcbb676224316ece (image=quay.io/prometheus/prometheus:v2.51.0, name=hopeful_zhukovsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:49 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7973237435ea86344e74ee8dc30c67327e0cac4d69a99a3f972d9781ee330ef6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7973237435ea86344e74ee8dc30c67327e0cac4d69a99a3f972d9781ee330ef6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:49 compute-0 podman[97635]: 2025-12-15 10:39:49.174445832 +0000 UTC m=+0.222389729 container start 542a8f4ef5f8b1331cd365b2f56b1333183eacce264ab8aedcbb676224316ece (image=quay.io/prometheus/prometheus:v2.51.0, name=hopeful_zhukovsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:49 compute-0 hopeful_zhukovsky[97663]: 65534 65534
Dec 15 10:39:49 compute-0 systemd[1]: libpod-542a8f4ef5f8b1331cd365b2f56b1333183eacce264ab8aedcbb676224316ece.scope: Deactivated successfully.
Dec 15 10:39:49 compute-0 podman[97647]: 2025-12-15 10:39:49.096411132 +0000 UTC m=+0.057693297 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:39:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc0042e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:49 compute-0 podman[97635]: 2025-12-15 10:39:49.273034923 +0000 UTC m=+0.320978860 container attach 542a8f4ef5f8b1331cd365b2f56b1333183eacce264ab8aedcbb676224316ece (image=quay.io/prometheus/prometheus:v2.51.0, name=hopeful_zhukovsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:49 compute-0 podman[97635]: 2025-12-15 10:39:49.27361679 +0000 UTC m=+0.321560707 container died 542a8f4ef5f8b1331cd365b2f56b1333183eacce264ab8aedcbb676224316ece (image=quay.io/prometheus/prometheus:v2.51.0, name=hopeful_zhukovsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d54a552e0d58875075d86ec49e14ac0948098b7a52506363f7e0b12a5b0c5ab-merged.mount: Deactivated successfully.
Dec 15 10:39:49 compute-0 podman[97635]: 2025-12-15 10:39:49.316561034 +0000 UTC m=+0.364504941 container remove 542a8f4ef5f8b1331cd365b2f56b1333183eacce264ab8aedcbb676224316ece (image=quay.io/prometheus/prometheus:v2.51.0, name=hopeful_zhukovsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:49 compute-0 podman[97635]: 2025-12-15 10:39:49.320444458 +0000 UTC m=+0.368388365 volume remove c6cee3786986e88991b93f9c995c8604f0ecd54024863db9237283c68caeef8c
Dec 15 10:39:49 compute-0 systemd[1]: libpod-conmon-542a8f4ef5f8b1331cd365b2f56b1333183eacce264ab8aedcbb676224316ece.scope: Deactivated successfully.
Dec 15 10:39:49 compute-0 podman[97647]: 2025-12-15 10:39:49.384321565 +0000 UTC m=+0.345603760 container init 708fc0568a27aa0484109eb5c6eb8eb43fe2e37f726dd38e93deaf5c04a7033d (image=quay.io/ceph/ceph:v19, name=reverent_liskov, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True)
Dec 15 10:39:49 compute-0 podman[97647]: 2025-12-15 10:39:49.391370741 +0000 UTC m=+0.352652896 container start 708fc0568a27aa0484109eb5c6eb8eb43fe2e37f726dd38e93deaf5c04a7033d (image=quay.io/ceph/ceph:v19, name=reverent_liskov, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 15 10:39:49 compute-0 podman[97647]: 2025-12-15 10:39:49.395767909 +0000 UTC m=+0.357050144 container attach 708fc0568a27aa0484109eb5c6eb8eb43fe2e37f726dd38e93deaf5c04a7033d (image=quay.io/ceph/ceph:v19, name=reverent_liskov, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 15 10:39:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec 15 10:39:49 compute-0 systemd[1]: Reloading.
Dec 15 10:39:49 compute-0 ceph-mon[74356]: 9.1f scrub starts
Dec 15 10:39:49 compute-0 ceph-mon[74356]: 9.1f scrub ok
Dec 15 10:39:49 compute-0 ceph-mon[74356]: 8.12 scrub starts
Dec 15 10:39:49 compute-0 ceph-mon[74356]: 8.12 scrub ok
Dec 15 10:39:49 compute-0 ceph-mon[74356]: osdmap e82: 3 total, 3 up, 3 in
Dec 15 10:39:49 compute-0 ceph-mon[74356]: 9.b scrub starts
Dec 15 10:39:49 compute-0 ceph-mon[74356]: 9.b scrub ok
Dec 15 10:39:49 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 15 10:39:49 compute-0 ceph-mon[74356]: 9.1c scrub starts
Dec 15 10:39:49 compute-0 ceph-mon[74356]: 9.1c scrub ok
Dec 15 10:39:49 compute-0 reverent_liskov[97669]: ERROR: invalid flag --daemon-type
Dec 15 10:39:49 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 15 10:39:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec 15 10:39:49 compute-0 podman[97647]: 2025-12-15 10:39:49.447710546 +0000 UTC m=+0.408992701 container died 708fc0568a27aa0484109eb5c6eb8eb43fe2e37f726dd38e93deaf5c04a7033d (image=quay.io/ceph/ceph:v19, name=reverent_liskov, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 15 10:39:49 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec 15 10:39:49 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 83 pg[10.a( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=6 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=83 pruub=13.907631874s) [1] r=-1 lpr=83 pi=[68,83)/1 crt=54'1067 mlcod 0'0 active pruub 203.042877197s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:49 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 83 pg[10.a( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=6 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=83 pruub=13.907595634s) [1] r=-1 lpr=83 pi=[68,83)/1 crt=54'1067 mlcod 0'0 unknown NOTIFY pruub 203.042877197s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:49 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 83 pg[10.1a( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=5 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=83 pruub=13.905646324s) [1] r=-1 lpr=83 pi=[68,83)/1 crt=54'1067 mlcod 0'0 active pruub 203.042877197s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:49 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 83 pg[10.1a( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=5 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=83 pruub=13.905598640s) [1] r=-1 lpr=83 pi=[68,83)/1 crt=54'1067 mlcod 0'0 unknown NOTIFY pruub 203.042877197s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:49 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 83 pg[10.18( v 54'1067 (0'0,54'1067] local-lis/les=82/83 n=5 ec=59/47 lis/c=80/59 les/c/f=81/60/0 sis=82) [0] r=0 lpr=82 pi=[59,82)/1 crt=54'1067 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:49 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 83 pg[10.8( v 54'1067 (0'0,54'1067] local-lis/les=82/83 n=7 ec=59/47 lis/c=80/59 les/c/f=81/60/0 sis=82) [0] r=0 lpr=82 pi=[59,82)/1 crt=54'1067 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:49.471385) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795189471558, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7693, "num_deletes": 251, "total_data_size": 13648687, "memory_usage": 14018592, "flush_reason": "Manual Compaction"}
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Dec 15 10:39:49 compute-0 systemd-rc-local-generator[97745]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:39:49 compute-0 systemd-sysv-generator[97748]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795189598246, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 11550765, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7830, "table_properties": {"data_size": 11523149, "index_size": 17389, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9157, "raw_key_size": 88160, "raw_average_key_size": 24, "raw_value_size": 11454086, "raw_average_value_size": 3148, "num_data_blocks": 770, "num_entries": 3638, "num_filter_entries": 3638, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765794891, "oldest_key_time": 1765794891, "file_creation_time": 1765795189, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d36e3d93-cef6-4482-9c71-0054ae87e0c9", "db_session_id": "8WPMBXYVT9DSSQWRN3T3", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 126899 microseconds, and 28197 cpu microseconds.
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:49.598295) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 11550765 bytes OK
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:49.598316) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:49.618824) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:49.618906) EVENT_LOG_v1 {"time_micros": 1765795189618894, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:49.618962) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13613953, prev total WAL file size 13613953, number of live WAL files 2.
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:49.621891) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(11MB) 13(57KB) 8(1944B)]
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795189622025, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 11611189, "oldest_snapshot_seqno": -1}
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3455 keys, 11564893 bytes, temperature: kUnknown
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795189707067, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 11564893, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11537735, "index_size": 17448, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8645, "raw_key_size": 86305, "raw_average_key_size": 24, "raw_value_size": 11470255, "raw_average_value_size": 3319, "num_data_blocks": 774, "num_entries": 3455, "num_filter_entries": 3455, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765794889, "oldest_key_time": 0, "file_creation_time": 1765795189, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d36e3d93-cef6-4482-9c71-0054ae87e0c9", "db_session_id": "8WPMBXYVT9DSSQWRN3T3", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:49.707367) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 11564893 bytes
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:49.708727) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.4 rd, 135.9 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(11.1, 0.0 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3747, records dropped: 292 output_compression: NoCompression
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:49.708757) EVENT_LOG_v1 {"time_micros": 1765795189708736, "job": 4, "event": "compaction_finished", "compaction_time_micros": 85119, "compaction_time_cpu_micros": 23482, "output_level": 6, "num_output_files": 1, "total_output_size": 11564893, "num_input_records": 3747, "num_output_records": 3455, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795189710493, "job": 4, "event": "table_file_deletion", "file_number": 19}
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795189710547, "job": 4, "event": "table_file_deletion", "file_number": 13}
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795189710603, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 15 10:39:49 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:49.621613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 15 10:39:49 compute-0 systemd[1]: libpod-708fc0568a27aa0484109eb5c6eb8eb43fe2e37f726dd38e93deaf5c04a7033d.scope: Deactivated successfully.
Dec 15 10:39:49 compute-0 podman[97647]: 2025-12-15 10:39:49.756210521 +0000 UTC m=+0.717492706 container remove 708fc0568a27aa0484109eb5c6eb8eb43fe2e37f726dd38e93deaf5c04a7033d (image=quay.io/ceph/ceph:v19, name=reverent_liskov, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Dec 15 10:39:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-7973237435ea86344e74ee8dc30c67327e0cac4d69a99a3f972d9781ee330ef6-merged.mount: Deactivated successfully.
Dec 15 10:39:49 compute-0 systemd[1]: libpod-conmon-708fc0568a27aa0484109eb5c6eb8eb43fe2e37f726dd38e93deaf5c04a7033d.scope: Deactivated successfully.
Dec 15 10:39:49 compute-0 sudo[97619]: pam_unix(sudo:session): session closed for user root
Dec 15 10:39:49 compute-0 systemd[1]: Reloading.
Dec 15 10:39:49 compute-0 systemd-sysv-generator[97790]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 15 10:39:49 compute-0 systemd-rc-local-generator[97786]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 15 10:39:50 compute-0 systemd[1]: Starting Ceph prometheus.compute-0 for 77365f67-614e-5a8d-b658-640395550c79...
Dec 15 10:39:50 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Dec 15 10:39:50 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Dec 15 10:39:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:50 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:50 compute-0 podman[97839]: 2025-12-15 10:39:50.286925928 +0000 UTC m=+0.042101151 container create 811bf452ef3ce2cd1829f815f500a947c1c9dba459a203c1e31a5cfea42f0cfb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc526db9b4395b69211c3ed2df20f25c9bbf173d760be4067b3c34dbd28da29a/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc526db9b4395b69211c3ed2df20f25c9bbf173d760be4067b3c34dbd28da29a/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Dec 15 10:39:50 compute-0 podman[97839]: 2025-12-15 10:39:50.3311444 +0000 UTC m=+0.086319633 container init 811bf452ef3ce2cd1829f815f500a947c1c9dba459a203c1e31a5cfea42f0cfb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:50 compute-0 podman[97839]: 2025-12-15 10:39:50.335654981 +0000 UTC m=+0.090830214 container start 811bf452ef3ce2cd1829f815f500a947c1c9dba459a203c1e31a5cfea42f0cfb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:39:50 compute-0 bash[97839]: 811bf452ef3ce2cd1829f815f500a947c1c9dba459a203c1e31a5cfea42f0cfb
Dec 15 10:39:50 compute-0 podman[97839]: 2025-12-15 10:39:50.267570583 +0000 UTC m=+0.022745846 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec 15 10:39:50 compute-0 systemd[1]: Started Ceph prometheus.compute-0 for 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:39:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0[97855]: ts=2025-12-15T10:39:50.365Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Dec 15 10:39:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0[97855]: ts=2025-12-15T10:39:50.365Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Dec 15 10:39:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0[97855]: ts=2025-12-15T10:39:50.365Z caller=main.go:623 level=info host_details="(Linux 5.14.0-648.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025 x86_64 compute-0 (none))"
Dec 15 10:39:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0[97855]: ts=2025-12-15T10:39:50.365Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Dec 15 10:39:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0[97855]: ts=2025-12-15T10:39:50.365Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Dec 15 10:39:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0[97855]: ts=2025-12-15T10:39:50.367Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Dec 15 10:39:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0[97855]: ts=2025-12-15T10:39:50.368Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Dec 15 10:39:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0[97855]: ts=2025-12-15T10:39:50.370Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Dec 15 10:39:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0[97855]: ts=2025-12-15T10:39:50.370Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Dec 15 10:39:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0[97855]: ts=2025-12-15T10:39:50.374Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Dec 15 10:39:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0[97855]: ts=2025-12-15T10:39:50.374Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.69µs
Dec 15 10:39:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0[97855]: ts=2025-12-15T10:39:50.374Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Dec 15 10:39:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0[97855]: ts=2025-12-15T10:39:50.374Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Dec 15 10:39:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0[97855]: ts=2025-12-15T10:39:50.374Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=36.151µs wal_replay_duration=273.308µs wbl_replay_duration=160ns total_replay_duration=336.94µs
Dec 15 10:39:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0[97855]: ts=2025-12-15T10:39:50.376Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Dec 15 10:39:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0[97855]: ts=2025-12-15T10:39:50.376Z caller=main.go:1153 level=info msg="TSDB started"
Dec 15 10:39:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0[97855]: ts=2025-12-15T10:39:50.376Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Dec 15 10:39:50 compute-0 sudo[97272]: pam_unix(sudo:session): session closed for user root
Dec 15 10:39:50 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:39:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0[97855]: ts=2025-12-15T10:39:50.404Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=27.811222ms db_storage=1.14µs remote_storage=1.22µs web_handler=290ns query_engine=800ns scrape=3.78113ms scrape_sd=187.516µs notify=17.72µs notify_sd=10.611µs rules=23.25722ms tracing=11.4µs
Dec 15 10:39:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0[97855]: ts=2025-12-15T10:39:50.404Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Dec 15 10:39:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0[97855]: ts=2025-12-15T10:39:50.404Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Dec 15 10:39:50 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:50 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:39:50 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:39:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:50 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003c50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:50 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:39:50 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:39:50.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:39:50 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:50 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec 15 10:39:50 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:39:50 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000030s ======
Dec 15 10:39:50 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:39:50.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 15 10:39:50 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:50 compute-0 ceph-mgr[74651]: [progress INFO root] complete: finished ev 53617e54-b36d-4b76-9081-ec81e7f1af88 (Updating prometheus deployment (+1 -> 1))
Dec 15 10:39:50 compute-0 ceph-mgr[74651]: [progress INFO root] Completed event 53617e54-b36d-4b76-9081-ec81e7f1af88 (Updating prometheus deployment (+1 -> 1)) in 6 seconds
Dec 15 10:39:50 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Dec 15 10:39:50 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Dec 15 10:39:50 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec 15 10:39:50 compute-0 ceph-mon[74356]: pgmap v98: 353 pgs: 353 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:50 compute-0 ceph-mon[74356]: 11.1c scrub starts
Dec 15 10:39:50 compute-0 ceph-mon[74356]: 11.1c scrub ok
Dec 15 10:39:50 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 15 10:39:50 compute-0 ceph-mon[74356]: osdmap e83: 3 total, 3 up, 3 in
Dec 15 10:39:50 compute-0 ceph-mon[74356]: 8.1d scrub starts
Dec 15 10:39:50 compute-0 ceph-mon[74356]: 8.1d scrub ok
Dec 15 10:39:50 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:50 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:50 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:50 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Dec 15 10:39:50 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec 15 10:39:50 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec 15 10:39:50 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 84 pg[10.a( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=6 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=84) [1]/[0] r=0 lpr=84 pi=[68,84)/1 crt=54'1067 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:50 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 84 pg[10.a( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=6 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=84) [1]/[0] r=0 lpr=84 pi=[68,84)/1 crt=54'1067 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:50 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 84 pg[10.1a( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=5 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=84) [1]/[0] r=0 lpr=84 pi=[68,84)/1 crt=54'1067 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:50 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 84 pg[10.1a( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=5 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=84) [1]/[0] r=0 lpr=84 pi=[68,84)/1 crt=54'1067 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 15 10:39:51 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Dec 15 10:39:51 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Dec 15 10:39:51 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v101: 353 pgs: 353 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:51 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Dec 15 10:39:51 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 15 10:39:51 compute-0 ceph-mgr[74651]: [progress INFO root] Writing back 29 completed events
Dec 15 10:39:51 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 15 10:39:51 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:51 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:51 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0003ea0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:51 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:51.378048) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795191378083, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 310, "num_deletes": 250, "total_data_size": 114070, "memory_usage": 121608, "flush_reason": "Manual Compaction"}
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795191382533, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 114610, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7831, "largest_seqno": 8140, "table_properties": {"data_size": 112495, "index_size": 279, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 4395, "raw_average_key_size": 15, "raw_value_size": 108332, "raw_average_value_size": 374, "num_data_blocks": 12, "num_entries": 289, "num_filter_entries": 289, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765795190, "oldest_key_time": 1765795190, "file_creation_time": 1765795191, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d36e3d93-cef6-4482-9c71-0054ae87e0c9", "db_session_id": "8WPMBXYVT9DSSQWRN3T3", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 4556 microseconds, and 1527 cpu microseconds.
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:51.382598) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 114610 bytes OK
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:51.382627) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:51.383983) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:51.384012) EVENT_LOG_v1 {"time_micros": 1765795191384003, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:51.384037) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 111830, prev total WAL file size 111830, number of live WAL files 2.
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:51.384580) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(111KB)], [20(11MB)]
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795191384631, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 11679503, "oldest_snapshot_seqno": -1}
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3230 keys, 11261187 bytes, temperature: kUnknown
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795191463392, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 11261187, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11235499, "index_size": 16490, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 83452, "raw_average_key_size": 25, "raw_value_size": 11171943, "raw_average_value_size": 3458, "num_data_blocks": 715, "num_entries": 3230, "num_filter_entries": 3230, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765794889, "oldest_key_time": 0, "file_creation_time": 1765795191, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d36e3d93-cef6-4482-9c71-0054ae87e0c9", "db_session_id": "8WPMBXYVT9DSSQWRN3T3", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:51.463780) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 11261187 bytes
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:51.465234) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.0 rd, 142.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.0 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(200.2) write-amplify(98.3) OK, records in: 3744, records dropped: 514 output_compression: NoCompression
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:51.465269) EVENT_LOG_v1 {"time_micros": 1765795191465254, "job": 6, "event": "compaction_finished", "compaction_time_micros": 78891, "compaction_time_cpu_micros": 25329, "output_level": 6, "num_output_files": 1, "total_output_size": 11261187, "num_input_records": 3744, "num_output_records": 3230, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795191465466, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec 15 10:39:51 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795191469320, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:51.384469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:51.469360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:51.469367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:51.469369) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:51.469371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 15 10:39:51 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:39:51.469372) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 15 10:39:51 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Dec 15 10:39:51 compute-0 ceph-mgr[74651]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 15 10:39:51 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.difmqj(active, since 107s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:39:51 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 15 10:39:51 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec 15 10:39:51 compute-0 ceph-mon[74356]: 11.7 scrub starts
Dec 15 10:39:51 compute-0 ceph-mon[74356]: 11.7 scrub ok
Dec 15 10:39:51 compute-0 ceph-mon[74356]: osdmap e84: 3 total, 3 up, 3 in
Dec 15 10:39:51 compute-0 ceph-mon[74356]: 11.1f scrub starts
Dec 15 10:39:51 compute-0 ceph-mon[74356]: 11.1f scrub ok
Dec 15 10:39:51 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 15 10:39:51 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:51 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec 15 10:39:51 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 85 pg[10.1a( v 54'1067 (0'0,54'1067] local-lis/les=84/85 n=5 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=84) [1]/[0] async=[1] r=0 lpr=84 pi=[68,84)/1 crt=54'1067 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:51 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 85 pg[10.a( v 54'1067 (0'0,54'1067] local-lis/les=84/85 n=6 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=84) [1]/[0] async=[1] r=0 lpr=84 pi=[68,84)/1 crt=54'1067 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:39:51 compute-0 sshd-session[90600]: Connection closed by 192.168.122.100 port 53996
Dec 15 10:39:51 compute-0 sshd-session[90565]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 15 10:39:51 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Dec 15 10:39:51 compute-0 systemd[1]: session-35.scope: Consumed 47.515s CPU time.
Dec 15 10:39:51 compute-0 systemd-logind[797]: Session 35 logged out. Waiting for processes to exit.
Dec 15 10:39:51 compute-0 systemd-logind[797]: Removed session 35.
Dec 15 10:39:51 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ignoring --setuser ceph since I am not root
Dec 15 10:39:51 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ignoring --setgroup ceph since I am not root
Dec 15 10:39:51 compute-0 ceph-mgr[74651]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 15 10:39:51 compute-0 ceph-mgr[74651]: pidfile_write: ignore empty --pid-file
Dec 15 10:39:51 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'alerts'
Dec 15 10:39:51 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:51.736+0000 7fb7eca96140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 15 10:39:51 compute-0 ceph-mgr[74651]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 15 10:39:51 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'balancer'
Dec 15 10:39:51 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:51.821+0000 7fb7eca96140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 15 10:39:51 compute-0 ceph-mgr[74651]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 15 10:39:51 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'cephadm'
Dec 15 10:39:52 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Dec 15 10:39:52 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Dec 15 10:39:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:52 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc0042e0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:52 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:39:52 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:39:52 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:39:52.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:39:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:52 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:52 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:39:52 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:39:52 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:39:52.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:39:52 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec 15 10:39:52 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'crash'
Dec 15 10:39:52 compute-0 ceph-mon[74356]: 11.4 scrub starts
Dec 15 10:39:52 compute-0 ceph-mon[74356]: 11.4 scrub ok
Dec 15 10:39:52 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Dec 15 10:39:52 compute-0 ceph-mon[74356]: mgrmap e28: compute-0.difmqj(active, since 107s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:39:52 compute-0 ceph-mon[74356]: from='mgr.14349 192.168.122.100:0/3343105132' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 15 10:39:52 compute-0 ceph-mon[74356]: osdmap e85: 3 total, 3 up, 3 in
Dec 15 10:39:52 compute-0 ceph-mon[74356]: 10.1 scrub starts
Dec 15 10:39:52 compute-0 ceph-mon[74356]: 10.1 scrub ok
Dec 15 10:39:52 compute-0 ceph-mon[74356]: 11.10 scrub starts
Dec 15 10:39:52 compute-0 ceph-mon[74356]: 11.10 scrub ok
Dec 15 10:39:52 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec 15 10:39:52 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec 15 10:39:52 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 86 pg[10.1a( v 54'1067 (0'0,54'1067] local-lis/les=84/85 n=5 ec=59/47 lis/c=84/68 les/c/f=85/69/0 sis=86 pruub=14.869279861s) [1] async=[1] r=-1 lpr=86 pi=[68,86)/1 crt=54'1067 mlcod 54'1067 active pruub 207.210220337s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:52 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 86 pg[10.a( v 54'1067 (0'0,54'1067] local-lis/les=84/85 n=6 ec=59/47 lis/c=84/68 les/c/f=85/69/0 sis=86 pruub=14.870643616s) [1] async=[1] r=-1 lpr=86 pi=[68,86)/1 crt=54'1067 mlcod 54'1067 active pruub 207.211608887s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:39:52 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 86 pg[10.1a( v 54'1067 (0'0,54'1067] local-lis/les=84/85 n=5 ec=59/47 lis/c=84/68 les/c/f=85/69/0 sis=86 pruub=14.869230270s) [1] r=-1 lpr=86 pi=[68,86)/1 crt=54'1067 mlcod 0'0 unknown NOTIFY pruub 207.210220337s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:52 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 86 pg[10.a( v 54'1067 (0'0,54'1067] local-lis/les=84/85 n=6 ec=59/47 lis/c=84/68 les/c/f=85/69/0 sis=86 pruub=14.870585442s) [1] r=-1 lpr=86 pi=[68,86)/1 crt=54'1067 mlcod 0'0 unknown NOTIFY pruub 207.211608887s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:39:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:52.671+0000 7fb7eca96140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 15 10:39:52 compute-0 ceph-mgr[74651]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 15 10:39:52 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'dashboard'
Dec 15 10:39:53 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Dec 15 10:39:53 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Dec 15 10:39:53 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'devicehealth'
Dec 15 10:39:53 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:53 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003c70 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:53 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:53.300+0000 7fb7eca96140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 15 10:39:53 compute-0 ceph-mgr[74651]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 15 10:39:53 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'diskprediction_local'
Dec 15 10:39:53 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 15 10:39:53 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 15 10:39:53 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]:   from numpy import show_config as show_numpy_config
Dec 15 10:39:53 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:53.456+0000 7fb7eca96140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 15 10:39:53 compute-0 ceph-mgr[74651]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 15 10:39:53 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'influx'
Dec 15 10:39:53 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:53.524+0000 7fb7eca96140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 15 10:39:53 compute-0 ceph-mgr[74651]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 15 10:39:53 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'insights'
Dec 15 10:39:53 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'iostat'
Dec 15 10:39:53 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec 15 10:39:53 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:53.705+0000 7fb7eca96140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 15 10:39:53 compute-0 ceph-mgr[74651]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 15 10:39:53 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'k8sevents'
Dec 15 10:39:53 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec 15 10:39:53 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec 15 10:39:53 compute-0 ceph-mon[74356]: 11.5 scrub starts
Dec 15 10:39:53 compute-0 ceph-mon[74356]: 11.5 scrub ok
Dec 15 10:39:53 compute-0 ceph-mon[74356]: osdmap e86: 3 total, 3 up, 3 in
Dec 15 10:39:53 compute-0 ceph-mon[74356]: 10.4 deep-scrub starts
Dec 15 10:39:53 compute-0 ceph-mon[74356]: 10.4 deep-scrub ok
Dec 15 10:39:53 compute-0 ceph-mon[74356]: 8.13 scrub starts
Dec 15 10:39:53 compute-0 ceph-mon[74356]: 8.13 scrub ok
Dec 15 10:39:53 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.11 deep-scrub starts
Dec 15 10:39:54 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 11.11 deep-scrub ok
Dec 15 10:39:54 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'localpool'
Dec 15 10:39:54 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'mds_autoscaler'
Dec 15 10:39:54 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:54 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0003ea0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:54 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'mirroring'
Dec 15 10:39:54 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:54 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:54 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:39:54 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:39:54 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:39:54.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:39:54 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:39:54 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000030s ======
Dec 15 10:39:54 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:39:54.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 15 10:39:54 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'nfs'
Dec 15 10:39:54 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:54.730+0000 7fb7eca96140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 15 10:39:54 compute-0 ceph-mgr[74651]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 15 10:39:54 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'orchestrator'
Dec 15 10:39:54 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:54.979+0000 7fb7eca96140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 15 10:39:54 compute-0 ceph-mgr[74651]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 15 10:39:54 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'osd_perf_query'
Dec 15 10:39:54 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 12.10 scrub starts
Dec 15 10:39:55 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 12.10 scrub ok
Dec 15 10:39:55 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:55.064+0000 7fb7eca96140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 15 10:39:55 compute-0 ceph-mgr[74651]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 15 10:39:55 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'osd_support'
Dec 15 10:39:55 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:55.131+0000 7fb7eca96140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 15 10:39:55 compute-0 ceph-mgr[74651]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 15 10:39:55 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'pg_autoscaler'
Dec 15 10:39:55 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec 15 10:39:55 compute-0 ceph-mon[74356]: 9.a scrub starts
Dec 15 10:39:55 compute-0 ceph-mon[74356]: 9.a scrub ok
Dec 15 10:39:55 compute-0 ceph-mon[74356]: 10.3 scrub starts
Dec 15 10:39:55 compute-0 ceph-mon[74356]: 10.3 scrub ok
Dec 15 10:39:55 compute-0 ceph-mon[74356]: osdmap e87: 3 total, 3 up, 3 in
Dec 15 10:39:55 compute-0 ceph-mon[74356]: 11.11 deep-scrub starts
Dec 15 10:39:55 compute-0 ceph-mon[74356]: 11.11 deep-scrub ok
Dec 15 10:39:55 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec 15 10:39:55 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec 15 10:39:55 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:55.215+0000 7fb7eca96140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 15 10:39:55 compute-0 ceph-mgr[74651]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 15 10:39:55 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'progress'
Dec 15 10:39:55 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:55 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc0042e0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:55 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:55.291+0000 7fb7eca96140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 15 10:39:55 compute-0 ceph-mgr[74651]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 15 10:39:55 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'prometheus'
Dec 15 10:39:55 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:55.632+0000 7fb7eca96140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 15 10:39:55 compute-0 ceph-mgr[74651]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 15 10:39:55 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'rbd_support'
Dec 15 10:39:55 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:55.731+0000 7fb7eca96140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 15 10:39:55 compute-0 ceph-mgr[74651]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 15 10:39:55 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'restful'
Dec 15 10:39:55 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'rgw'
Dec 15 10:39:56 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 12.6 scrub starts
Dec 15 10:39:56 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 12.6 scrub ok
Dec 15 10:39:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:56.182+0000 7fb7eca96140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 15 10:39:56 compute-0 ceph-mgr[74651]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 15 10:39:56 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'rook'
Dec 15 10:39:56 compute-0 ceph-mon[74356]: 8.8 scrub starts
Dec 15 10:39:56 compute-0 ceph-mon[74356]: 8.8 scrub ok
Dec 15 10:39:56 compute-0 ceph-mon[74356]: 9.18 scrub starts
Dec 15 10:39:56 compute-0 ceph-mon[74356]: 9.18 scrub ok
Dec 15 10:39:56 compute-0 ceph-mon[74356]: 12.10 scrub starts
Dec 15 10:39:56 compute-0 ceph-mon[74356]: 12.10 scrub ok
Dec 15 10:39:56 compute-0 ceph-mon[74356]: osdmap e88: 3 total, 3 up, 3 in
Dec 15 10:39:56 compute-0 ceph-mon[74356]: 9.d scrub starts
Dec 15 10:39:56 compute-0 ceph-mon[74356]: 9.d scrub ok
Dec 15 10:39:56 compute-0 ceph-mon[74356]: 12.9 scrub starts
Dec 15 10:39:56 compute-0 ceph-mon[74356]: 12.9 scrub ok
Dec 15 10:39:56 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec 15 10:39:56 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec 15 10:39:56 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec 15 10:39:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:56 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003c90 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:56 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:39:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:56 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:56 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:39:56 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:39:56 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:39:56.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:39:56 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:39:56 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:39:56 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:39:56.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:39:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:56.825+0000 7fb7eca96140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 15 10:39:56 compute-0 ceph-mgr[74651]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 15 10:39:56 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'selftest'
Dec 15 10:39:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:56.907+0000 7fb7eca96140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 15 10:39:56 compute-0 ceph-mgr[74651]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 15 10:39:56 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'snap_schedule'
Dec 15 10:39:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:56.996+0000 7fb7eca96140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 15 10:39:56 compute-0 ceph-mgr[74651]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 15 10:39:56 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'stats'
Dec 15 10:39:57 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 12.12 scrub starts
Dec 15 10:39:57 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 12.12 scrub ok
Dec 15 10:39:57 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'status'
Dec 15 10:39:57 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:57.154+0000 7fb7eca96140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 15 10:39:57 compute-0 ceph-mgr[74651]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 15 10:39:57 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'telegraf'
Dec 15 10:39:57 compute-0 ceph-mon[74356]: 12.6 scrub starts
Dec 15 10:39:57 compute-0 ceph-mon[74356]: 12.6 scrub ok
Dec 15 10:39:57 compute-0 ceph-mon[74356]: osdmap e89: 3 total, 3 up, 3 in
Dec 15 10:39:57 compute-0 ceph-mon[74356]: 8.4 scrub starts
Dec 15 10:39:57 compute-0 ceph-mon[74356]: 8.4 scrub ok
Dec 15 10:39:57 compute-0 ceph-mon[74356]: 8.a scrub starts
Dec 15 10:39:57 compute-0 ceph-mon[74356]: 8.a scrub ok
Dec 15 10:39:57 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:57.233+0000 7fb7eca96140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 15 10:39:57 compute-0 ceph-mgr[74651]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 15 10:39:57 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'telemetry'
Dec 15 10:39:57 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:57 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0003ea0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:57 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:57.397+0000 7fb7eca96140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 15 10:39:57 compute-0 ceph-mgr[74651]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 15 10:39:57 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'test_orchestrator'
Dec 15 10:39:57 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:57.640+0000 7fb7eca96140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 15 10:39:57 compute-0 ceph-mgr[74651]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 15 10:39:57 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'volumes'
Dec 15 10:39:57 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:57.928+0000 7fb7eca96140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 15 10:39:57 compute-0 ceph-mgr[74651]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 15 10:39:57 compute-0 ceph-mgr[74651]: mgr[py] Loading python module 'zabbix'
Dec 15 10:39:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:58.000+0000 7fb7eca96140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : Active manager daemon compute-0.difmqj restarted
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.difmqj
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: ms_deliver_dispatch: unhandled message 0x5567f64eb860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec 15 10:39:58 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 12.b scrub starts
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: mgr handle_mgr_map Activating!
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: mgr handle_mgr_map I am now activating
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.difmqj(active, starting, since 0.0454118s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 12.b scrub ok
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.fathlc"} v 0)
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.fathlc"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e8 all = 0
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.mmswte"} v 0)
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.mmswte"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e8 all = 0
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.mhljub"} v 0)
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.mhljub"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e8 all = 0
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.difmqj", "id": "compute-0.difmqj"} v 0)
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.difmqj", "id": "compute-0.difmqj"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.tlqguq", "id": "compute-1.tlqguq"} v 0)
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.tlqguq", "id": "compute-1.tlqguq"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.gxhwsu", "id": "compute-2.gxhwsu"} v 0)
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gxhwsu", "id": "compute-2.gxhwsu"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).mds e8 all = 1
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: balancer
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [balancer INFO root] Starting
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : Manager daemon compute-0.difmqj is now available
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [balancer INFO root] Optimize plan auto_2025-12-15_10:39:58
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: cephadm
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: crash
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: dashboard
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO access_control] Loading user roles DB version=2
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO sso] Loading SSO DB version=1
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: devicehealth
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [devicehealth INFO root] Starting
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: iostat
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: nfs
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: orchestrator
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: pg_autoscaler
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: progress
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [progress INFO root] Loading...
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: prometheus
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fb76d5ded60>, <progress.module.GhostEvent object at 0x7fb76d5ded90>, <progress.module.GhostEvent object at 0x7fb76d5deca0>, <progress.module.GhostEvent object at 0x7fb76d5decd0>, <progress.module.GhostEvent object at 0x7fb76d5ded00>, <progress.module.GhostEvent object at 0x7fb76d5dec10>, <progress.module.GhostEvent object at 0x7fb76d5dec40>, <progress.module.GhostEvent object at 0x7fb76d5dec70>, <progress.module.GhostEvent object at 0x7fb76d5deb50>, <progress.module.GhostEvent object at 0x7fb76d5deb80>, <progress.module.GhostEvent object at 0x7fb76d5debb0>, <progress.module.GhostEvent object at 0x7fb76d5de8e0>, <progress.module.GhostEvent object at 0x7fb76d5de730>, <progress.module.GhostEvent object at 0x7fb76d5de790>, <progress.module.GhostEvent object at 0x7fb76d5de7c0>, <progress.module.GhostEvent object at 0x7fb76d5de7f0>, <progress.module.GhostEvent object at 0x7fb76d5de820>, <progress.module.GhostEvent object at 0x7fb76d5de850>, <progress.module.GhostEvent object at 0x7fb76d5de880>, <progress.module.GhostEvent object at 0x7fb76d5de8b0>, <progress.module.GhostEvent object at 0x7fb76d5de9a0>, <progress.module.GhostEvent object at 0x7fb76d5de9d0>, <progress.module.GhostEvent object at 0x7fb76d5dea00>, <progress.module.GhostEvent object at 0x7fb76d5dea30>, <progress.module.GhostEvent object at 0x7fb76d5dea60>, <progress.module.GhostEvent object at 0x7fb76d5dea90>, <progress.module.GhostEvent object at 0x7fb76d5deac0>, <progress.module.GhostEvent object at 0x7fb76d5deaf0>, <progress.module.GhostEvent object at 0x7fb76d5deb20>] historic events
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [progress INFO root] Loaded OSDMap, ready.
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [prometheus INFO root] server_addr: :: server_port: 9283
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [prometheus INFO root] Cache enabled
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [prometheus INFO root] starting metric collection thread
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [prometheus INFO root] Starting engine...
Dec 15 10:39:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: [15/Dec/2025:10:39:58] ENGINE Bus STARTING
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.error] [15/Dec/2025:10:39:58] ENGINE Bus STARTING
Dec 15 10:39:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: CherryPy Checker:
Dec 15 10:39:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: The Application mounted at '' has an empty config.
Dec 15 10:39:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] recovery thread starting
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] starting setup
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: rbd_support
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/mirror_snapshot_schedule"} v 0)
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/mirror_snapshot_schedule"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: restful
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [restful INFO root] server_addr: :: server_port: 8003
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [restful WARNING root] server not running: no certificate configured
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: status
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: telemetry
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 15 10:39:58 compute-0 ceph-mon[74356]: 12.12 scrub starts
Dec 15 10:39:58 compute-0 ceph-mon[74356]: 12.12 scrub ok
Dec 15 10:39:58 compute-0 ceph-mon[74356]: 11.12 scrub starts
Dec 15 10:39:58 compute-0 ceph-mon[74356]: 11.12 scrub ok
Dec 15 10:39:58 compute-0 ceph-mon[74356]: 11.16 scrub starts
Dec 15 10:39:58 compute-0 ceph-mon[74356]: 11.16 scrub ok
Dec 15 10:39:58 compute-0 ceph-mon[74356]: Active manager daemon compute-0.difmqj restarted
Dec 15 10:39:58 compute-0 ceph-mon[74356]: Activating manager daemon compute-0.difmqj
Dec 15 10:39:58 compute-0 ceph-mon[74356]: osdmap e90: 3 total, 3 up, 3 in
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mgrmap e29: compute-0.difmqj(active, starting, since 0.0454118s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:39:58 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.fathlc"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.mmswte"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.mhljub"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.difmqj", "id": "compute-0.difmqj"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.tlqguq", "id": "compute-1.tlqguq"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gxhwsu", "id": "compute-2.gxhwsu"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: Manager daemon compute-0.difmqj is now available
Dec 15 10:39:58 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:39:58 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/mirror_snapshot_schedule"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] PerfHandler: starting
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: mgr load Constructed class from module: volumes
Dec 15 10:39:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:58 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc0042e0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec 15 10:39:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:58.287+0000 7fb75b1eb640 -1 client.0 error registering admin socket command: (17) File exists
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: client.0 error registering admin socket command: (17) File exists
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_task_task: images, start_after=
Dec 15 10:39:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:58.293+0000 7fb7541dd640 -1 client.0 error registering admin socket command: (17) File exists
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: client.0 error registering admin socket command: (17) File exists
Dec 15 10:39:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:58.293+0000 7fb7541dd640 -1 client.0 error registering admin socket command: (17) File exists
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: client.0 error registering admin socket command: (17) File exists
Dec 15 10:39:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:58.293+0000 7fb7541dd640 -1 client.0 error registering admin socket command: (17) File exists
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: client.0 error registering admin socket command: (17) File exists
Dec 15 10:39:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:58.293+0000 7fb7541dd640 -1 client.0 error registering admin socket command: (17) File exists
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: client.0 error registering admin socket command: (17) File exists
Dec 15 10:39:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: 2025-12-15T10:39:58.293+0000 7fb7541dd640 -1 client.0 error registering admin socket command: (17) File exists
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: client.0 error registering admin socket command: (17) File exists
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] TaskHandler: starting
Dec 15 10:39:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/trash_purge_schedule"} v 0)
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/trash_purge_schedule"}]: dispatch
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] setup complete
Dec 15 10:39:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: [15/Dec/2025:10:39:58] ENGINE Serving on http://:::9283
Dec 15 10:39:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: [15/Dec/2025:10:39:58] ENGINE Bus STARTED
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.error] [15/Dec/2025:10:39:58] ENGINE Serving on http://:::9283
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.error] [15/Dec/2025:10:39:58] ENGINE Bus STARTED
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [prometheus INFO root] Engine started.
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gxhwsu restarted
Dec 15 10:39:58 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gxhwsu started
Dec 15 10:39:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:58 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003cb0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:58 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:39:58 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:39:58 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:39:58.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:39:58 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec 15 10:39:58 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:39:58 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:39:58.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec 15 10:39:58 compute-0 sshd-session[98067]: Accepted publickey for ceph-admin from 192.168.122.100 port 44576 ssh2: RSA SHA256:3azC1xJ0J6DfmukNQYX52hxlEZBJA1o57dtmni4O/KI
Dec 15 10:39:58 compute-0 systemd-logind[797]: New session 37 of user ceph-admin.
Dec 15 10:39:58 compute-0 systemd[1]: Started Session 37 of User ceph-admin.
Dec 15 10:39:58 compute-0 sshd-session[98067]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 15 10:39:58 compute-0 ceph-mgr[74651]: [dashboard INFO dashboard.module] Engine started.
Dec 15 10:39:58 compute-0 sudo[98083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:39:58 compute-0 sudo[98083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:39:58 compute-0 sudo[98083]: pam_unix(sudo:session): session closed for user root
Dec 15 10:39:58 compute-0 sudo[98109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 15 10:39:58 compute-0 sudo[98109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:39:59 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 12.e scrub starts
Dec 15 10:39:59 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 12.e scrub ok
Dec 15 10:39:59 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:39:59 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:39:59 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.tlqguq restarted
Dec 15 10:39:59 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.tlqguq started
Dec 15 10:39:59 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.difmqj(active, since 1.44153s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:39:59 compute-0 podman[98206]: 2025-12-15 10:39:59.456308788 +0000 UTC m=+0.143184285 container exec 79ed8dc51b1852fd831764c2817cfa3745ae4937bc9015bdb965e376ab3cc58d (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:39:59 compute-0 ceph-mon[74356]: 12.b scrub starts
Dec 15 10:39:59 compute-0 ceph-mon[74356]: 12.b scrub ok
Dec 15 10:39:59 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.difmqj/trash_purge_schedule"}]: dispatch
Dec 15 10:39:59 compute-0 ceph-mon[74356]: Standby manager daemon compute-2.gxhwsu restarted
Dec 15 10:39:59 compute-0 ceph-mon[74356]: Standby manager daemon compute-2.gxhwsu started
Dec 15 10:39:59 compute-0 ceph-mon[74356]: 11.8 deep-scrub starts
Dec 15 10:39:59 compute-0 ceph-mon[74356]: 11.8 deep-scrub ok
Dec 15 10:39:59 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v3: 353 pgs: 353 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:39:59 compute-0 ceph-mgr[74651]: [cephadm INFO cherrypy.error] [15/Dec/2025:10:39:59] ENGINE Bus STARTING
Dec 15 10:39:59 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : [15/Dec/2025:10:39:59] ENGINE Bus STARTING
Dec 15 10:39:59 compute-0 ceph-mgr[74651]: [cephadm INFO cherrypy.error] [15/Dec/2025:10:39:59] ENGINE Serving on https://192.168.122.100:7150
Dec 15 10:39:59 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : [15/Dec/2025:10:39:59] ENGINE Serving on https://192.168.122.100:7150
Dec 15 10:39:59 compute-0 ceph-mgr[74651]: [cephadm INFO cherrypy.error] [15/Dec/2025:10:39:59] ENGINE Client ('192.168.122.100', 34074) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 15 10:39:59 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : [15/Dec/2025:10:39:59] ENGINE Client ('192.168.122.100', 34074) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 15 10:39:59 compute-0 ceph-mgr[74651]: [cephadm INFO cherrypy.error] [15/Dec/2025:10:39:59] ENGINE Serving on http://192.168.122.100:8765
Dec 15 10:39:59 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : [15/Dec/2025:10:39:59] ENGINE Serving on http://192.168.122.100:8765
Dec 15 10:39:59 compute-0 ceph-mgr[74651]: [cephadm INFO cherrypy.error] [15/Dec/2025:10:39:59] ENGINE Bus STARTED
Dec 15 10:39:59 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : [15/Dec/2025:10:39:59] ENGINE Bus STARTED
Dec 15 10:39:59 compute-0 sudo[98283]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvrepsbambpnzdcmcmgujksvvdrearon ; /usr/bin/python3'
Dec 15 10:39:59 compute-0 sudo[98283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:40:00 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 12.c scrub starts
Dec 15 10:40:00 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 12.c scrub ok
Dec 15 10:40:00 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v4: 353 pgs: 353 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:00 compute-0 python3[98285]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:40:00 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:00 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0003ea0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:00 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:00 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:00 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:00 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:00 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:00.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:00 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:00 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 15 10:40:00 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:00.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 15 10:40:01 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 12.a deep-scrub starts
Dec 15 10:40:01 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:01 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4000f30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:01 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Dec 15 10:40:01 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec 15 10:40:01 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:40:01 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:40:01 compute-0 ceph-mon[74356]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 15 10:40:01 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:40:01 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 12.a deep-scrub ok
Dec 15 10:40:01 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:01 compute-0 podman[98206]: 2025-12-15 10:40:01.853799381 +0000 UTC m=+2.540674888 container exec_died 79ed8dc51b1852fd831764c2817cfa3745ae4937bc9015bdb965e376ab3cc58d (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:40:01 compute-0 ceph-mgr[74651]: [devicehealth INFO root] Check health
Dec 15 10:40:01 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:40:01 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec 15 10:40:01 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:01 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.difmqj(active, since 3s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:40:01 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:40:01 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 15 10:40:01 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec 15 10:40:01 compute-0 ceph-mon[74356]: 11.f scrub starts
Dec 15 10:40:01 compute-0 ceph-mon[74356]: 11.f scrub ok
Dec 15 10:40:01 compute-0 ceph-mon[74356]: 12.e scrub starts
Dec 15 10:40:01 compute-0 ceph-mon[74356]: 12.e scrub ok
Dec 15 10:40:01 compute-0 ceph-mon[74356]: 11.1 deep-scrub starts
Dec 15 10:40:01 compute-0 ceph-mon[74356]: 11.1 deep-scrub ok
Dec 15 10:40:01 compute-0 ceph-mon[74356]: Standby manager daemon compute-1.tlqguq restarted
Dec 15 10:40:01 compute-0 ceph-mon[74356]: Standby manager daemon compute-1.tlqguq started
Dec 15 10:40:01 compute-0 ceph-mon[74356]: mgrmap e30: compute-0.difmqj(active, since 1.44153s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:40:01 compute-0 ceph-mon[74356]: [15/Dec/2025:10:39:59] ENGINE Bus STARTING
Dec 15 10:40:01 compute-0 ceph-mon[74356]: [15/Dec/2025:10:39:59] ENGINE Serving on https://192.168.122.100:7150
Dec 15 10:40:01 compute-0 ceph-mon[74356]: [15/Dec/2025:10:39:59] ENGINE Client ('192.168.122.100', 34074) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 15 10:40:01 compute-0 ceph-mon[74356]: 12.3 scrub starts
Dec 15 10:40:01 compute-0 ceph-mon[74356]: 12.3 scrub ok
Dec 15 10:40:01 compute-0 ceph-mon[74356]: [15/Dec/2025:10:39:59] ENGINE Serving on http://192.168.122.100:8765
Dec 15 10:40:01 compute-0 ceph-mon[74356]: [15/Dec/2025:10:39:59] ENGINE Bus STARTED
Dec 15 10:40:01 compute-0 ceph-mon[74356]: 12.c scrub starts
Dec 15 10:40:01 compute-0 ceph-mon[74356]: 12.c scrub ok
Dec 15 10:40:01 compute-0 ceph-mon[74356]: pgmap v4: 353 pgs: 353 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:01 compute-0 ceph-mon[74356]: 8.10 scrub starts
Dec 15 10:40:01 compute-0 ceph-mon[74356]: 8.10 scrub ok
Dec 15 10:40:01 compute-0 ceph-mon[74356]: 12.1e scrub starts
Dec 15 10:40:01 compute-0 ceph-mon[74356]: 12.1e scrub ok
Dec 15 10:40:01 compute-0 ceph-mon[74356]: 12.a deep-scrub starts
Dec 15 10:40:01 compute-0 ceph-mon[74356]: 11.19 scrub starts
Dec 15 10:40:01 compute-0 ceph-mon[74356]: 11.19 scrub ok
Dec 15 10:40:01 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec 15 10:40:01 compute-0 ceph-mon[74356]: overall HEALTH_OK
Dec 15 10:40:01 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:02 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:02 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec 15 10:40:02 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 12.1c deep-scrub starts
Dec 15 10:40:02 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v6: 353 pgs: 353 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:02 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Dec 15 10:40:02 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec 15 10:40:02 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 12.1c deep-scrub ok
Dec 15 10:40:02 compute-0 podman[98286]: 2025-12-15 10:40:01.999646483 +0000 UTC m=+1.871795575 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:40:02 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:02 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003d60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:02 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:02 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0003ea0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:02 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:02 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:02 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:02.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:02 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:02 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000030s ======
Dec 15 10:40:02 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:02.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 15 10:40:02 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:02 compute-0 podman[98286]: 2025-12-15 10:40:02.463547857 +0000 UTC m=+2.335696919 container create 724752fda8385b829db670b8682f857bd44e938711ff4e3a0de64bffff272059 (image=quay.io/ceph/ceph:v19, name=magical_euclid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Dec 15 10:40:02 compute-0 systemd[1]: Started libpod-conmon-724752fda8385b829db670b8682f857bd44e938711ff4e3a0de64bffff272059.scope.
Dec 15 10:40:02 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeff0a3421e540d1b7fc9511d007ea9d46a12383c3153c6fd65e1070da4426c8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeff0a3421e540d1b7fc9511d007ea9d46a12383c3153c6fd65e1070da4426c8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:02 compute-0 podman[98286]: 2025-12-15 10:40:02.570828452 +0000 UTC m=+2.442977524 container init 724752fda8385b829db670b8682f857bd44e938711ff4e3a0de64bffff272059 (image=quay.io/ceph/ceph:v19, name=magical_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:40:02 compute-0 podman[98286]: 2025-12-15 10:40:02.57763418 +0000 UTC m=+2.449783242 container start 724752fda8385b829db670b8682f857bd44e938711ff4e3a0de64bffff272059 (image=quay.io/ceph/ceph:v19, name=magical_euclid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 15 10:40:02 compute-0 podman[98286]: 2025-12-15 10:40:02.580768792 +0000 UTC m=+2.452917854 container attach 724752fda8385b829db670b8682f857bd44e938711ff4e3a0de64bffff272059 (image=quay.io/ceph/ceph:v19, name=magical_euclid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True)
Dec 15 10:40:02 compute-0 magical_euclid[98331]: ERROR: invalid flag --daemon-type
Dec 15 10:40:02 compute-0 systemd[1]: libpod-724752fda8385b829db670b8682f857bd44e938711ff4e3a0de64bffff272059.scope: Deactivated successfully.
Dec 15 10:40:02 compute-0 podman[98286]: 2025-12-15 10:40:02.637106908 +0000 UTC m=+2.509255990 container died 724752fda8385b829db670b8682f857bd44e938711ff4e3a0de64bffff272059 (image=quay.io/ceph/ceph:v19, name=magical_euclid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:40:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-eeff0a3421e540d1b7fc9511d007ea9d46a12383c3153c6fd65e1070da4426c8-merged.mount: Deactivated successfully.
Dec 15 10:40:02 compute-0 podman[98286]: 2025-12-15 10:40:02.754988652 +0000 UTC m=+2.627137714 container remove 724752fda8385b829db670b8682f857bd44e938711ff4e3a0de64bffff272059 (image=quay.io/ceph/ceph:v19, name=magical_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 15 10:40:02 compute-0 systemd[1]: libpod-conmon-724752fda8385b829db670b8682f857bd44e938711ff4e3a0de64bffff272059.scope: Deactivated successfully.
Dec 15 10:40:02 compute-0 sudo[98283]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:02 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:40:02] "GET /metrics HTTP/1.1" 200 46571 "" "Prometheus/2.51.0"
Dec 15 10:40:02 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:40:02] "GET /metrics HTTP/1.1" 200 46571 "" "Prometheus/2.51.0"
Dec 15 10:40:03 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec 15 10:40:03 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 12.19 deep-scrub starts
Dec 15 10:40:03 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e32: compute-0.difmqj(active, since 5s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:40:03 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 12.19 deep-scrub ok
Dec 15 10:40:03 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 15 10:40:03 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec 15 10:40:03 compute-0 ceph-mon[74356]: 8.1b scrub starts
Dec 15 10:40:03 compute-0 ceph-mon[74356]: 8.1b scrub ok
Dec 15 10:40:03 compute-0 ceph-mon[74356]: 12.a deep-scrub ok
Dec 15 10:40:03 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:03 compute-0 ceph-mon[74356]: mgrmap e31: compute-0.difmqj(active, since 3s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:40:03 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 15 10:40:03 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:03 compute-0 ceph-mon[74356]: osdmap e91: 3 total, 3 up, 3 in
Dec 15 10:40:03 compute-0 ceph-mon[74356]: 12.1c deep-scrub starts
Dec 15 10:40:03 compute-0 ceph-mon[74356]: pgmap v6: 353 pgs: 353 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:03 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec 15 10:40:03 compute-0 ceph-mon[74356]: 12.1c deep-scrub ok
Dec 15 10:40:03 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:03 compute-0 ceph-mon[74356]: 8.c scrub starts
Dec 15 10:40:03 compute-0 ceph-mon[74356]: 8.c scrub ok
Dec 15 10:40:03 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 92 pg[10.d( v 54'1067 (0'0,54'1067] local-lis/les=74/75 n=8 ec=59/47 lis/c=74/74 les/c/f=75/75/0 sis=92 pruub=12.254065514s) [1] r=-1 lpr=92 pi=[74,92)/1 crt=54'1067 mlcod 0'0 active pruub 214.978713989s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:40:03 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 92 pg[10.d( v 54'1067 (0'0,54'1067] local-lis/les=74/75 n=8 ec=59/47 lis/c=74/74 les/c/f=75/75/0 sis=92 pruub=12.254027367s) [1] r=-1 lpr=92 pi=[74,92)/1 crt=54'1067 mlcod 0'0 unknown NOTIFY pruub 214.978713989s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:40:03 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec 15 10:40:03 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 92 pg[10.1d( v 54'1067 (0'0,54'1067] local-lis/les=74/75 n=5 ec=59/47 lis/c=74/74 les/c/f=75/75/0 sis=92 pruub=12.253194809s) [1] r=-1 lpr=92 pi=[74,92)/1 crt=54'1067 mlcod 0'0 active pruub 214.978591919s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:40:03 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 92 pg[10.1d( v 54'1067 (0'0,54'1067] local-lis/les=74/75 n=5 ec=59/47 lis/c=74/74 les/c/f=75/75/0 sis=92 pruub=12.253173828s) [1] r=-1 lpr=92 pi=[74,92)/1 crt=54'1067 mlcod 0'0 unknown NOTIFY pruub 214.978591919s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:40:03 compute-0 podman[98436]: 2025-12-15 10:40:03.064504376 +0000 UTC m=+0.082131511 container exec af7cec967afbf19ae8aa93bce2194c8b8c2dd9c500f2d7f44ece310d3a1d4cc1 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:03 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:40:03 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:03 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:40:03 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:03 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec 15 10:40:03 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 15 10:40:03 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec 15 10:40:03 compute-0 podman[98436]: 2025-12-15 10:40:03.103873426 +0000 UTC m=+0.121500511 container exec_died af7cec967afbf19ae8aa93bce2194c8b8c2dd9c500f2d7f44ece310d3a1d4cc1 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:03 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:03 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0003ea0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:03 compute-0 podman[98527]: 2025-12-15 10:40:03.425278918 +0000 UTC m=+0.062833667 container exec c0f38bc539f687c54b7eb01f058f3691b3a24f961ff23e54889334c42825f1f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:40:03 compute-0 podman[98527]: 2025-12-15 10:40:03.446687373 +0000 UTC m=+0.084242062 container exec_died c0f38bc539f687c54b7eb01f058f3691b3a24f961ff23e54889334c42825f1f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 15 10:40:03 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:40:03 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:03 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:40:03 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:03 compute-0 podman[98590]: 2025-12-15 10:40:03.679407433 +0000 UTC m=+0.069798630 container exec 55276bcc9605a59cf148bdf11fc4eb753ca401193f2349bda31c6714ea73d19e (image=quay.io/ceph/haproxy:2.3, name=ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-nfs-cephfs-compute-0-ykblqa)
Dec 15 10:40:03 compute-0 podman[98590]: 2025-12-15 10:40:03.685070599 +0000 UTC m=+0.075461756 container exec_died 55276bcc9605a59cf148bdf11fc4eb753ca401193f2349bda31c6714ea73d19e (image=quay.io/ceph/haproxy:2.3, name=ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-nfs-cephfs-compute-0-ykblqa)
Dec 15 10:40:03 compute-0 podman[98655]: 2025-12-15 10:40:03.896412673 +0000 UTC m=+0.056266075 container exec eb383ee2660aa4da80291a00f1592a5a221d1461ab8a14db8d6bf5db83163774 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, name=keepalived, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, release=1793)
Dec 15 10:40:03 compute-0 podman[98655]: 2025-12-15 10:40:03.907866298 +0000 UTC m=+0.067719670 container exec_died eb383ee2660aa4da80291a00f1592a5a221d1461ab8a14db8d6bf5db83163774 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., release=1793, distribution-scope=public, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, name=keepalived)
Dec 15 10:40:04 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 12.8 scrub starts
Dec 15 10:40:04 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 12.8 scrub ok
Dec 15 10:40:04 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec 15 10:40:04 compute-0 ceph-mon[74356]: 8.18 scrub starts
Dec 15 10:40:04 compute-0 ceph-mon[74356]: 8.18 scrub ok
Dec 15 10:40:04 compute-0 ceph-mon[74356]: 12.19 deep-scrub starts
Dec 15 10:40:04 compute-0 ceph-mon[74356]: mgrmap e32: compute-0.difmqj(active, since 5s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:40:04 compute-0 ceph-mon[74356]: 12.19 deep-scrub ok
Dec 15 10:40:04 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 15 10:40:04 compute-0 ceph-mon[74356]: osdmap e92: 3 total, 3 up, 3 in
Dec 15 10:40:04 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:04 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:04 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 15 10:40:04 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec 15 10:40:04 compute-0 ceph-mon[74356]: 9.e deep-scrub starts
Dec 15 10:40:04 compute-0 ceph-mon[74356]: 9.e deep-scrub ok
Dec 15 10:40:04 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:04 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:04 compute-0 ceph-mon[74356]: 12.4 deep-scrub starts
Dec 15 10:40:04 compute-0 ceph-mon[74356]: 12.4 deep-scrub ok
Dec 15 10:40:04 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v8: 353 pgs: 353 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:04 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Dec 15 10:40:04 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 15 10:40:04 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec 15 10:40:04 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec 15 10:40:04 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 93 pg[10.d( v 54'1067 (0'0,54'1067] local-lis/les=74/75 n=8 ec=59/47 lis/c=74/74 les/c/f=75/75/0 sis=93) [1]/[0] r=0 lpr=93 pi=[74,93)/1 crt=54'1067 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:40:04 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 93 pg[10.d( v 54'1067 (0'0,54'1067] local-lis/les=74/75 n=8 ec=59/47 lis/c=74/74 les/c/f=75/75/0 sis=93) [1]/[0] r=0 lpr=93 pi=[74,93)/1 crt=54'1067 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 15 10:40:04 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 93 pg[10.1d( v 54'1067 (0'0,54'1067] local-lis/les=74/75 n=5 ec=59/47 lis/c=74/74 les/c/f=75/75/0 sis=93) [1]/[0] r=0 lpr=93 pi=[74,93)/1 crt=54'1067 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:40:04 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 93 pg[10.1d( v 54'1067 (0'0,54'1067] local-lis/les=74/75 n=5 ec=59/47 lis/c=74/74 les/c/f=75/75/0 sis=93) [1]/[0] r=0 lpr=93 pi=[74,93)/1 crt=54'1067 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 15 10:40:04 compute-0 podman[98718]: 2025-12-15 10:40:04.160431388 +0000 UTC m=+0.069305626 container exec 73244696e42fd889cd12e816ba79afefa66014c2436355b7b29fda12a488ec50 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:04 compute-0 podman[98718]: 2025-12-15 10:40:04.1878733 +0000 UTC m=+0.096747518 container exec_died 73244696e42fd889cd12e816ba79afefa66014c2436355b7b29fda12a488ec50 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:04 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:04 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4001e60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:04 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:04 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:04 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003d80 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:04 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 15 10:40:04 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:04.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 15 10:40:04 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:04 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000030s ======
Dec 15 10:40:04 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:04.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 15 10:40:04 compute-0 podman[98793]: 2025-12-15 10:40:04.467071148 +0000 UTC m=+0.070798080 container exec 23d65a61651e1c0b44eb1273a6940cc9a6f8c6b86de8d6db3e4162bb6add8e05 (image=quay.io/ceph/grafana:10.4.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:40:04 compute-0 podman[98793]: 2025-12-15 10:40:04.663674232 +0000 UTC m=+0.267401104 container exec_died 23d65a61651e1c0b44eb1273a6940cc9a6f8c6b86de8d6db3e4162bb6add8e05 (image=quay.io/ceph/grafana:10.4.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:40:04 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Dec 15 10:40:05 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Dec 15 10:40:05 compute-0 ceph-mon[74356]: 12.8 scrub starts
Dec 15 10:40:05 compute-0 ceph-mon[74356]: 12.8 scrub ok
Dec 15 10:40:05 compute-0 ceph-mon[74356]: pgmap v8: 353 pgs: 353 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:05 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 15 10:40:05 compute-0 ceph-mon[74356]: osdmap e93: 3 total, 3 up, 3 in
Dec 15 10:40:05 compute-0 ceph-mon[74356]: 9.9 scrub starts
Dec 15 10:40:05 compute-0 ceph-mon[74356]: 9.9 scrub ok
Dec 15 10:40:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec 15 10:40:05 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 15 10:40:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec 15 10:40:05 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec 15 10:40:05 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 94 pg[10.d( v 54'1067 (0'0,54'1067] local-lis/les=93/94 n=8 ec=59/47 lis/c=74/74 les/c/f=75/75/0 sis=93) [1]/[0] async=[1] r=0 lpr=93 pi=[74,93)/1 crt=54'1067 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:40:05 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 94 pg[10.1d( v 54'1067 (0'0,54'1067] local-lis/les=93/94 n=5 ec=59/47 lis/c=74/74 les/c/f=75/75/0 sis=93) [1]/[0] async=[1] r=0 lpr=93 pi=[74,93)/1 crt=54'1067 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:40:05 compute-0 podman[98902]: 2025-12-15 10:40:05.122612822 +0000 UTC m=+0.065345600 container exec 811bf452ef3ce2cd1829f815f500a947c1c9dba459a203c1e31a5cfea42f0cfb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:05 compute-0 podman[98902]: 2025-12-15 10:40:05.17562062 +0000 UTC m=+0.118353368 container exec_died 811bf452ef3ce2cd1829f815f500a947c1c9dba459a203c1e31a5cfea42f0cfb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:05 compute-0 sudo[98109]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:40:05 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:40:05 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:05 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:05 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:05 compute-0 sudo[98946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:40:05 compute-0 sudo[98946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:05 compute-0 sudo[98946]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:40:05 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:40:05 compute-0 sudo[98971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 15 10:40:05 compute-0 sudo[98971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:05 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec 15 10:40:05 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 15 10:40:05 compute-0 sudo[98971]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:05 compute-0 sudo[99026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:40:05 compute-0 sudo[99026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:05 compute-0 sudo[99026]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:06 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Dec 15 10:40:06 compute-0 sudo[99051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 15 10:40:06 compute-0 sudo[99051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:06 compute-0 ceph-osd[82838]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Dec 15 10:40:06 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v11: 353 pgs: 2 remapped+peering, 2 active+remapped, 349 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 14 op/s; 0 B/s, 1 objects/s recovering
Dec 15 10:40:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec 15 10:40:06 compute-0 ceph-mon[74356]: 10.2 scrub starts
Dec 15 10:40:06 compute-0 ceph-mon[74356]: 10.2 scrub ok
Dec 15 10:40:06 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 15 10:40:06 compute-0 ceph-mon[74356]: osdmap e94: 3 total, 3 up, 3 in
Dec 15 10:40:06 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:06 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:06 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:06 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:06 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 15 10:40:06 compute-0 ceph-mon[74356]: 8.b scrub starts
Dec 15 10:40:06 compute-0 ceph-mon[74356]: 8.b scrub ok
Dec 15 10:40:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec 15 10:40:06 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec 15 10:40:06 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 95 pg[10.1d( v 54'1067 (0'0,54'1067] local-lis/les=93/94 n=5 ec=59/47 lis/c=93/74 les/c/f=94/75/0 sis=95 pruub=14.980875015s) [1] async=[1] r=-1 lpr=95 pi=[74,95)/1 crt=54'1067 mlcod 54'1067 active pruub 220.781997681s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:40:06 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 95 pg[10.d( v 54'1067 (0'0,54'1067] local-lis/les=93/94 n=8 ec=59/47 lis/c=93/74 les/c/f=94/75/0 sis=95 pruub=14.980811119s) [1] async=[1] r=-1 lpr=95 pi=[74,95)/1 crt=54'1067 mlcod 54'1067 active pruub 220.781967163s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:40:06 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 95 pg[10.1d( v 54'1067 (0'0,54'1067] local-lis/les=93/94 n=5 ec=59/47 lis/c=93/74 les/c/f=94/75/0 sis=95 pruub=14.980813980s) [1] r=-1 lpr=95 pi=[74,95)/1 crt=54'1067 mlcod 0'0 unknown NOTIFY pruub 220.781997681s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:40:06 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 95 pg[10.d( v 54'1067 (0'0,54'1067] local-lis/les=93/94 n=8 ec=59/47 lis/c=93/74 les/c/f=94/75/0 sis=95 pruub=14.980746269s) [1] r=-1 lpr=95 pi=[74,95)/1 crt=54'1067 mlcod 0'0 unknown NOTIFY pruub 220.781967163s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:40:06 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:06 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0003ea0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:06 compute-0 sudo[99051]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:40:06 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:06 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4001e60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:06 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:06 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:06 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:06.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:06 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:06 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:06 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:06.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:06 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:40:06 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 15 10:40:06 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 15 10:40:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:40:06 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 15 10:40:06 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:40:06 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 15 10:40:06 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 15 10:40:06 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec 15 10:40:06 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec 15 10:40:06 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec 15 10:40:06 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec 15 10:40:06 compute-0 sudo[99094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 15 10:40:06 compute-0 sudo[99094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:06 compute-0 sudo[99094]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:06 compute-0 sudo[99119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph
Dec 15 10:40:06 compute-0 sudo[99119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:06 compute-0 sudo[99119]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:06 compute-0 sudo[99144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.conf.new
Dec 15 10:40:06 compute-0 sudo[99144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:06 compute-0 sudo[99144]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:06 compute-0 sudo[99169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:40:06 compute-0 sudo[99169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:06 compute-0 sudo[99169]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:40:06 compute-0 sudo[99194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.conf.new
Dec 15 10:40:06 compute-0 sudo[99194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:06 compute-0 sudo[99194]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:06 compute-0 sudo[99242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.conf.new
Dec 15 10:40:06 compute-0 sudo[99242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:06 compute-0 sudo[99242]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:07 compute-0 sudo[99267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.conf.new
Dec 15 10:40:07 compute-0 sudo[99267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:07 compute-0 sudo[99267]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:07 compute-0 sudo[99292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 15 10:40:07 compute-0 sudo[99292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:07 compute-0 sudo[99292]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:07 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:40:07 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:40:07 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:40:07 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:40:07 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec 15 10:40:07 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:40:07 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:40:07 compute-0 sudo[99317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config
Dec 15 10:40:07 compute-0 sudo[99317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:07 compute-0 sudo[99317]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:07 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec 15 10:40:07 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec 15 10:40:07 compute-0 ceph-mon[74356]: 10.18 scrub starts
Dec 15 10:40:07 compute-0 ceph-mon[74356]: 10.18 scrub ok
Dec 15 10:40:07 compute-0 ceph-mon[74356]: pgmap v11: 353 pgs: 2 remapped+peering, 2 active+remapped, 349 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 14 op/s; 0 B/s, 1 objects/s recovering
Dec 15 10:40:07 compute-0 ceph-mon[74356]: 10.17 scrub starts
Dec 15 10:40:07 compute-0 ceph-mon[74356]: 10.17 scrub ok
Dec 15 10:40:07 compute-0 ceph-mon[74356]: osdmap e95: 3 total, 3 up, 3 in
Dec 15 10:40:07 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:07 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:07 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 15 10:40:07 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:07 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:40:07 compute-0 ceph-mon[74356]: Updating compute-0:/etc/ceph/ceph.conf
Dec 15 10:40:07 compute-0 ceph-mon[74356]: Updating compute-1:/etc/ceph/ceph.conf
Dec 15 10:40:07 compute-0 ceph-mon[74356]: Updating compute-2:/etc/ceph/ceph.conf
Dec 15 10:40:07 compute-0 ceph-mon[74356]: 8.5 scrub starts
Dec 15 10:40:07 compute-0 ceph-mon[74356]: 8.5 scrub ok
Dec 15 10:40:07 compute-0 sudo[99342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config
Dec 15 10:40:07 compute-0 sudo[99342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:07 compute-0 sudo[99342]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:07 compute-0 sudo[99367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf.new
Dec 15 10:40:07 compute-0 sudo[99367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:07 compute-0 sudo[99367]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:07 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:07 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003da0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:07 compute-0 sudo[99393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:40:07 compute-0 sudo[99393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:07 compute-0 sudo[99393]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:07 compute-0 sudo[99418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf.new
Dec 15 10:40:07 compute-0 sudo[99418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:07 compute-0 sudo[99418]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:07 compute-0 sudo[99466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf.new
Dec 15 10:40:07 compute-0 sudo[99466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:07 compute-0 sudo[99466]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:07 compute-0 sudo[99491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf.new
Dec 15 10:40:07 compute-0 sudo[99491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:07 compute-0 sudo[99491]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:07 compute-0 sudo[99516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf.new /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:40:07 compute-0 sudo[99516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:07 compute-0 sudo[99516]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:07 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:40:07 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:40:07 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:40:07 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:40:07 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:40:07 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:40:07 compute-0 sudo[99541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 15 10:40:07 compute-0 sudo[99541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:07 compute-0 sudo[99541]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:07 compute-0 sudo[99566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph
Dec 15 10:40:07 compute-0 sudo[99566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:07 compute-0 sudo[99566]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:07 compute-0 sudo[99591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.client.admin.keyring.new
Dec 15 10:40:07 compute-0 sudo[99591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:07 compute-0 sudo[99591]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:07 compute-0 sudo[99616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:40:07 compute-0 sudo[99616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:07 compute-0 sudo[99616]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:07 compute-0 sudo[99641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.client.admin.keyring.new
Dec 15 10:40:07 compute-0 sudo[99641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:07 compute-0 sudo[99641]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:08 compute-0 sudo[99689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.client.admin.keyring.new
Dec 15 10:40:08 compute-0 sudo[99689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:08 compute-0 sudo[99689]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:08 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v14: 353 pgs: 2 remapped+peering, 2 active+remapped, 349 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 14 op/s; 0 B/s, 1 objects/s recovering
Dec 15 10:40:08 compute-0 sudo[99714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.client.admin.keyring.new
Dec 15 10:40:08 compute-0 sudo[99714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:08 compute-0 sudo[99714]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:08 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:40:08 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:40:08 compute-0 sudo[99739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 15 10:40:08 compute-0 sudo[99739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:08 compute-0 sudo[99739]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:08 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:40:08 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:40:08 compute-0 ceph-mon[74356]: 10.f scrub starts
Dec 15 10:40:08 compute-0 ceph-mon[74356]: 10.f scrub ok
Dec 15 10:40:08 compute-0 ceph-mon[74356]: Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:40:08 compute-0 ceph-mon[74356]: Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:40:08 compute-0 ceph-mon[74356]: Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.conf
Dec 15 10:40:08 compute-0 ceph-mon[74356]: osdmap e96: 3 total, 3 up, 3 in
Dec 15 10:40:08 compute-0 ceph-mon[74356]: 11.17 scrub starts
Dec 15 10:40:08 compute-0 ceph-mon[74356]: 11.17 scrub ok
Dec 15 10:40:08 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:40:08 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:40:08 compute-0 sudo[99764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config
Dec 15 10:40:08 compute-0 sudo[99764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:08 compute-0 sudo[99764]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:08 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:08 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003da0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:08 compute-0 sudo[99789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config
Dec 15 10:40:08 compute-0 sudo[99789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:08 compute-0 sudo[99789]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:08 compute-0 sudo[99814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring.new
Dec 15 10:40:08 compute-0 sudo[99814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:08 compute-0 sudo[99814]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:08 compute-0 sudo[99839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:40:08 compute-0 sudo[99839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:08 compute-0 sudo[99839]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:08 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:08 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4001e60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:08 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:08 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000030s ======
Dec 15 10:40:08 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:08.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 15 10:40:08 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:08 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:08 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:08.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:08 compute-0 sudo[99864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring.new
Dec 15 10:40:08 compute-0 sudo[99864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:08 compute-0 sudo[99864]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:08 compute-0 sudo[99913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring.new
Dec 15 10:40:08 compute-0 sudo[99913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:08 compute-0 sudo[99913]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:08 compute-0 sudo[99938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring.new
Dec 15 10:40:08 compute-0 sudo[99938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:08 compute-0 sudo[99938]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:08 compute-0 sudo[99963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-77365f67-614e-5a8d-b658-640395550c79/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring.new /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:40:08 compute-0 sudo[99963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:08 compute-0 sudo[99963]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:40:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:40:08 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:40:08 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:40:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:40:08 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:08 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:08 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:40:08 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 15 10:40:08 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 15 10:40:08 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 15 10:40:08 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 15 10:40:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 15 10:40:08 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 15 10:40:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:40:08 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:08 compute-0 sudo[99988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:40:08 compute-0 sudo[99988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:08 compute-0 sudo[99988]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:09 compute-0 sudo[100013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 15 10:40:09 compute-0 sudo[100013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:09 compute-0 ceph-mon[74356]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:40:09 compute-0 ceph-mon[74356]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:40:09 compute-0 ceph-mon[74356]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 15 10:40:09 compute-0 ceph-mon[74356]: pgmap v14: 353 pgs: 2 remapped+peering, 2 active+remapped, 349 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 14 op/s; 0 B/s, 1 objects/s recovering
Dec 15 10:40:09 compute-0 ceph-mon[74356]: 10.a scrub starts
Dec 15 10:40:09 compute-0 ceph-mon[74356]: 10.a scrub ok
Dec 15 10:40:09 compute-0 ceph-mon[74356]: Updating compute-2:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:40:09 compute-0 ceph-mon[74356]: Updating compute-0:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:40:09 compute-0 ceph-mon[74356]: Updating compute-1:/var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/config/ceph.client.admin.keyring
Dec 15 10:40:09 compute-0 ceph-mon[74356]: 11.e scrub starts
Dec 15 10:40:09 compute-0 ceph-mon[74356]: 11.e scrub ok
Dec 15 10:40:09 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:09 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:09 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:09 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:09 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:09 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:09 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:09 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:09 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 15 10:40:09 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 15 10:40:09 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:09 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:09 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0003ea0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:09 compute-0 podman[100078]: 2025-12-15 10:40:09.508851704 +0000 UTC m=+0.051701452 container create cb263ddf08593aa4d704807a773d350957821eab52b1baa6e470a763e93d1725 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_babbage, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:40:09 compute-0 systemd[90580]: Starting Mark boot as successful...
Dec 15 10:40:09 compute-0 systemd[90580]: Finished Mark boot as successful.
Dec 15 10:40:09 compute-0 systemd[1]: Started libpod-conmon-cb263ddf08593aa4d704807a773d350957821eab52b1baa6e470a763e93d1725.scope.
Dec 15 10:40:09 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:09 compute-0 podman[100078]: 2025-12-15 10:40:09.487353225 +0000 UTC m=+0.030202983 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:40:09 compute-0 podman[100078]: 2025-12-15 10:40:09.600701677 +0000 UTC m=+0.143551445 container init cb263ddf08593aa4d704807a773d350957821eab52b1baa6e470a763e93d1725 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 15 10:40:09 compute-0 podman[100078]: 2025-12-15 10:40:09.610597456 +0000 UTC m=+0.153447224 container start cb263ddf08593aa4d704807a773d350957821eab52b1baa6e470a763e93d1725 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_babbage, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 15 10:40:09 compute-0 podman[100078]: 2025-12-15 10:40:09.614259693 +0000 UTC m=+0.157109501 container attach cb263ddf08593aa4d704807a773d350957821eab52b1baa6e470a763e93d1725 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_babbage, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:40:09 compute-0 nervous_babbage[100095]: 167 167
Dec 15 10:40:09 compute-0 systemd[1]: libpod-cb263ddf08593aa4d704807a773d350957821eab52b1baa6e470a763e93d1725.scope: Deactivated successfully.
Dec 15 10:40:09 compute-0 podman[100078]: 2025-12-15 10:40:09.616827818 +0000 UTC m=+0.159677566 container died cb263ddf08593aa4d704807a773d350957821eab52b1baa6e470a763e93d1725 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_babbage, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 15 10:40:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-f715cc678c92bc27b30e00318d7ca6fe870f146dd8806cd733c356e664ed0b0b-merged.mount: Deactivated successfully.
Dec 15 10:40:09 compute-0 podman[100078]: 2025-12-15 10:40:09.661488043 +0000 UTC m=+0.204337811 container remove cb263ddf08593aa4d704807a773d350957821eab52b1baa6e470a763e93d1725 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:40:09 compute-0 systemd[1]: libpod-conmon-cb263ddf08593aa4d704807a773d350957821eab52b1baa6e470a763e93d1725.scope: Deactivated successfully.
Dec 15 10:40:09 compute-0 podman[100119]: 2025-12-15 10:40:09.828479972 +0000 UTC m=+0.045548711 container create 0afde3a509934682eb86eaaddab5f04915952026299c065dce3877381f0addab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 15 10:40:09 compute-0 systemd[1]: Started libpod-conmon-0afde3a509934682eb86eaaddab5f04915952026299c065dce3877381f0addab.scope.
Dec 15 10:40:09 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f1c6fccfdaa34af2c691030e00103d9600c957d4db45256c99c06fdb96747d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f1c6fccfdaa34af2c691030e00103d9600c957d4db45256c99c06fdb96747d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f1c6fccfdaa34af2c691030e00103d9600c957d4db45256c99c06fdb96747d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f1c6fccfdaa34af2c691030e00103d9600c957d4db45256c99c06fdb96747d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f1c6fccfdaa34af2c691030e00103d9600c957d4db45256c99c06fdb96747d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:09 compute-0 podman[100119]: 2025-12-15 10:40:09.810638532 +0000 UTC m=+0.027707301 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:40:09 compute-0 podman[100119]: 2025-12-15 10:40:09.913123366 +0000 UTC m=+0.130192105 container init 0afde3a509934682eb86eaaddab5f04915952026299c065dce3877381f0addab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:40:09 compute-0 podman[100119]: 2025-12-15 10:40:09.923281723 +0000 UTC m=+0.140350452 container start 0afde3a509934682eb86eaaddab5f04915952026299c065dce3877381f0addab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_buck, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:40:09 compute-0 podman[100119]: 2025-12-15 10:40:09.926056294 +0000 UTC m=+0.143125033 container attach 0afde3a509934682eb86eaaddab5f04915952026299c065dce3877381f0addab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_buck, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:40:10 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v15: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Dec 15 10:40:10 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 15 10:40:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec 15 10:40:10 compute-0 ceph-mon[74356]: 10.9 scrub starts
Dec 15 10:40:10 compute-0 ceph-mon[74356]: 10.9 scrub ok
Dec 15 10:40:10 compute-0 ceph-mon[74356]: 8.d scrub starts
Dec 15 10:40:10 compute-0 ceph-mon[74356]: 8.d scrub ok
Dec 15 10:40:10 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 15 10:40:10 compute-0 boring_buck[100135]: --> passed data devices: 0 physical, 1 LVM
Dec 15 10:40:10 compute-0 boring_buck[100135]: --> All data devices are unavailable
Dec 15 10:40:10 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:10 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:10 compute-0 systemd[1]: libpod-0afde3a509934682eb86eaaddab5f04915952026299c065dce3877381f0addab.scope: Deactivated successfully.
Dec 15 10:40:10 compute-0 podman[100119]: 2025-12-15 10:40:10.309744655 +0000 UTC m=+0.526813394 container died 0afde3a509934682eb86eaaddab5f04915952026299c065dce3877381f0addab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_buck, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 15 10:40:10 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 15 10:40:10 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec 15 10:40:10 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec 15 10:40:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-16f1c6fccfdaa34af2c691030e00103d9600c957d4db45256c99c06fdb96747d-merged.mount: Deactivated successfully.
Dec 15 10:40:10 compute-0 podman[100119]: 2025-12-15 10:40:10.365822213 +0000 UTC m=+0.582890952 container remove 0afde3a509934682eb86eaaddab5f04915952026299c065dce3877381f0addab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_buck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:40:10 compute-0 systemd[1]: libpod-conmon-0afde3a509934682eb86eaaddab5f04915952026299c065dce3877381f0addab.scope: Deactivated successfully.
Dec 15 10:40:10 compute-0 sudo[100013]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:10 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:10 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:10 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003dc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:10 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:10 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:10.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:10 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:10 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:10 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:10.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:10 compute-0 sudo[100161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:40:10 compute-0 sudo[100161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:10 compute-0 sudo[100161]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:10 compute-0 sudo[100187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 -- lvm list --format json
Dec 15 10:40:10 compute-0 sudo[100187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:10 compute-0 podman[100253]: 2025-12-15 10:40:10.961779997 +0000 UTC m=+0.058507841 container create 4686e834f80de3bc7b353e34f55adf427f8533e98e3f5bdbd6e27948cb9e083f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_gauss, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 15 10:40:10 compute-0 systemd[1]: Started libpod-conmon-4686e834f80de3bc7b353e34f55adf427f8533e98e3f5bdbd6e27948cb9e083f.scope.
Dec 15 10:40:11 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:11 compute-0 podman[100253]: 2025-12-15 10:40:10.940984669 +0000 UTC m=+0.037712533 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:40:11 compute-0 podman[100253]: 2025-12-15 10:40:11.042888477 +0000 UTC m=+0.139616311 container init 4686e834f80de3bc7b353e34f55adf427f8533e98e3f5bdbd6e27948cb9e083f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_gauss, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 15 10:40:11 compute-0 podman[100253]: 2025-12-15 10:40:11.052663902 +0000 UTC m=+0.149391706 container start 4686e834f80de3bc7b353e34f55adf427f8533e98e3f5bdbd6e27948cb9e083f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:40:11 compute-0 cranky_gauss[100269]: 167 167
Dec 15 10:40:11 compute-0 systemd[1]: libpod-4686e834f80de3bc7b353e34f55adf427f8533e98e3f5bdbd6e27948cb9e083f.scope: Deactivated successfully.
Dec 15 10:40:11 compute-0 podman[100253]: 2025-12-15 10:40:11.058105971 +0000 UTC m=+0.154833805 container attach 4686e834f80de3bc7b353e34f55adf427f8533e98e3f5bdbd6e27948cb9e083f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_gauss, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 15 10:40:11 compute-0 podman[100253]: 2025-12-15 10:40:11.058314737 +0000 UTC m=+0.155042551 container died 4686e834f80de3bc7b353e34f55adf427f8533e98e3f5bdbd6e27948cb9e083f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_gauss, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:40:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-af50bfa076e3523de66500a4fce264408bc47e38f4914b87ef6b985f5b4cd7d9-merged.mount: Deactivated successfully.
Dec 15 10:40:11 compute-0 podman[100253]: 2025-12-15 10:40:11.098007147 +0000 UTC m=+0.194734971 container remove 4686e834f80de3bc7b353e34f55adf427f8533e98e3f5bdbd6e27948cb9e083f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_gauss, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 15 10:40:11 compute-0 systemd[1]: libpod-conmon-4686e834f80de3bc7b353e34f55adf427f8533e98e3f5bdbd6e27948cb9e083f.scope: Deactivated successfully.
Dec 15 10:40:11 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:11 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4001e60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:11 compute-0 podman[100295]: 2025-12-15 10:40:11.31786112 +0000 UTC m=+0.061758555 container create d097b8f62149873b853d7222f3653d175e0cd83e027285ba9bf44adbfaacb416 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:40:11 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec 15 10:40:11 compute-0 systemd[1]: Started libpod-conmon-d097b8f62149873b853d7222f3653d175e0cd83e027285ba9bf44adbfaacb416.scope.
Dec 15 10:40:11 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:11 compute-0 podman[100295]: 2025-12-15 10:40:11.288098601 +0000 UTC m=+0.031996076 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:40:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b6f5d3ceedbe555bd1f5440470e788d40afa64583dfa5a1424ec38f0201dad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b6f5d3ceedbe555bd1f5440470e788d40afa64583dfa5a1424ec38f0201dad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b6f5d3ceedbe555bd1f5440470e788d40afa64583dfa5a1424ec38f0201dad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b6f5d3ceedbe555bd1f5440470e788d40afa64583dfa5a1424ec38f0201dad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:11 compute-0 podman[100295]: 2025-12-15 10:40:11.406664466 +0000 UTC m=+0.150561891 container init d097b8f62149873b853d7222f3653d175e0cd83e027285ba9bf44adbfaacb416 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_diffie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 15 10:40:11 compute-0 podman[100295]: 2025-12-15 10:40:11.414178725 +0000 UTC m=+0.158076150 container start d097b8f62149873b853d7222f3653d175e0cd83e027285ba9bf44adbfaacb416 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 15 10:40:11 compute-0 podman[100295]: 2025-12-15 10:40:11.419321395 +0000 UTC m=+0.163218840 container attach d097b8f62149873b853d7222f3653d175e0cd83e027285ba9bf44adbfaacb416 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 15 10:40:11 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec 15 10:40:11 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec 15 10:40:11 compute-0 ceph-mon[74356]: pgmap v15: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:11 compute-0 ceph-mon[74356]: 9.6 scrub starts
Dec 15 10:40:11 compute-0 ceph-mon[74356]: 9.6 scrub ok
Dec 15 10:40:11 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 15 10:40:11 compute-0 ceph-mon[74356]: osdmap e97: 3 total, 3 up, 3 in
Dec 15 10:40:11 compute-0 ceph-mon[74356]: 11.3 scrub starts
Dec 15 10:40:11 compute-0 ceph-mon[74356]: 11.3 scrub ok
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]: {
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:     "0": [
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:         {
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:             "devices": [
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:                 "/dev/loop3"
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:             ],
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:             "lv_name": "ceph_lv0",
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:             "lv_size": "21470642176",
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MCSOGR-6V69-UVGq-8FpK-DBjb-LyDE-FLUPxJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=77365f67-614e-5a8d-b658-640395550c79,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7690eca0-4e87-4157-a045-1912448da925,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:             "lv_uuid": "MCSOGR-6V69-UVGq-8FpK-DBjb-LyDE-FLUPxJ",
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:             "name": "ceph_lv0",
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:             "tags": {
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:                 "ceph.block_uuid": "MCSOGR-6V69-UVGq-8FpK-DBjb-LyDE-FLUPxJ",
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:                 "ceph.cephx_lockbox_secret": "",
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:                 "ceph.cluster_fsid": "77365f67-614e-5a8d-b658-640395550c79",
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:                 "ceph.cluster_name": "ceph",
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:                 "ceph.crush_device_class": "",
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:                 "ceph.encrypted": "0",
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:                 "ceph.osd_fsid": "7690eca0-4e87-4157-a045-1912448da925",
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:                 "ceph.osd_id": "0",
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:                 "ceph.type": "block",
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:                 "ceph.vdo": "0",
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:                 "ceph.with_tpm": "0"
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:             },
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:             "type": "block",
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:             "vg_name": "ceph_vg0"
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:         }
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]:     ]
Dec 15 10:40:11 compute-0 dreamy_diffie[100312]: }
Dec 15 10:40:11 compute-0 systemd[1]: libpod-d097b8f62149873b853d7222f3653d175e0cd83e027285ba9bf44adbfaacb416.scope: Deactivated successfully.
Dec 15 10:40:11 compute-0 podman[100295]: 2025-12-15 10:40:11.750237855 +0000 UTC m=+0.494135290 container died d097b8f62149873b853d7222f3653d175e0cd83e027285ba9bf44adbfaacb416 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_diffie, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 15 10:40:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1b6f5d3ceedbe555bd1f5440470e788d40afa64583dfa5a1424ec38f0201dad-merged.mount: Deactivated successfully.
Dec 15 10:40:11 compute-0 podman[100295]: 2025-12-15 10:40:11.807023884 +0000 UTC m=+0.550921299 container remove d097b8f62149873b853d7222f3653d175e0cd83e027285ba9bf44adbfaacb416 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:40:11 compute-0 systemd[1]: libpod-conmon-d097b8f62149873b853d7222f3653d175e0cd83e027285ba9bf44adbfaacb416.scope: Deactivated successfully.
Dec 15 10:40:11 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:40:11 compute-0 sudo[100187]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:11 compute-0 sudo[100334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:40:11 compute-0 sudo[100334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:11 compute-0 sudo[100334]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:11 compute-0 sudo[100359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 -- raw list --format json
Dec 15 10:40:11 compute-0 sudo[100359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:12 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v18: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:12 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Dec 15 10:40:12 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec 15 10:40:12 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:12 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0003ea0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:12 compute-0 podman[100424]: 2025-12-15 10:40:12.401856504 +0000 UTC m=+0.040584606 container create 2952a784e6b12368d71c2bb686f6b6e47295b4d2b9acb22ff9e1cd2c6105063b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:40:12 compute-0 systemd[1]: Started libpod-conmon-2952a784e6b12368d71c2bb686f6b6e47295b4d2b9acb22ff9e1cd2c6105063b.scope.
Dec 15 10:40:12 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:12 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:12 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:12.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:12 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:12 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:12 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec 15 10:40:12 compute-0 ceph-mon[74356]: 11.14 scrub starts
Dec 15 10:40:12 compute-0 ceph-mon[74356]: 11.14 scrub ok
Dec 15 10:40:12 compute-0 ceph-mon[74356]: osdmap e98: 3 total, 3 up, 3 in
Dec 15 10:40:12 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec 15 10:40:12 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:12 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:12 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:12.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:12 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 15 10:40:12 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec 15 10:40:12 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec 15 10:40:12 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:12 compute-0 podman[100424]: 2025-12-15 10:40:12.386605089 +0000 UTC m=+0.025333221 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:40:12 compute-0 podman[100424]: 2025-12-15 10:40:12.490771852 +0000 UTC m=+0.129499994 container init 2952a784e6b12368d71c2bb686f6b6e47295b4d2b9acb22ff9e1cd2c6105063b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_neumann, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec 15 10:40:12 compute-0 podman[100424]: 2025-12-15 10:40:12.499385774 +0000 UTC m=+0.138113896 container start 2952a784e6b12368d71c2bb686f6b6e47295b4d2b9acb22ff9e1cd2c6105063b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_neumann, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:40:12 compute-0 podman[100424]: 2025-12-15 10:40:12.503458513 +0000 UTC m=+0.142186625 container attach 2952a784e6b12368d71c2bb686f6b6e47295b4d2b9acb22ff9e1cd2c6105063b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_neumann, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:40:12 compute-0 tender_neumann[100440]: 167 167
Dec 15 10:40:12 compute-0 systemd[1]: libpod-2952a784e6b12368d71c2bb686f6b6e47295b4d2b9acb22ff9e1cd2c6105063b.scope: Deactivated successfully.
Dec 15 10:40:12 compute-0 podman[100424]: 2025-12-15 10:40:12.507651535 +0000 UTC m=+0.146379647 container died 2952a784e6b12368d71c2bb686f6b6e47295b4d2b9acb22ff9e1cd2c6105063b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:40:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-22d068435f565785dd6753315d9895df8385cfdc6c368aa4aa10c0dab6f54d4a-merged.mount: Deactivated successfully.
Dec 15 10:40:12 compute-0 podman[100424]: 2025-12-15 10:40:12.547149699 +0000 UTC m=+0.185877811 container remove 2952a784e6b12368d71c2bb686f6b6e47295b4d2b9acb22ff9e1cd2c6105063b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 15 10:40:12 compute-0 systemd[1]: libpod-conmon-2952a784e6b12368d71c2bb686f6b6e47295b4d2b9acb22ff9e1cd2c6105063b.scope: Deactivated successfully.
Dec 15 10:40:12 compute-0 podman[100464]: 2025-12-15 10:40:12.693022102 +0000 UTC m=+0.046794548 container create 6bead8f52777481374f56c533dfb59a449645d1414d983595802df06ac78c9b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lewin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 15 10:40:12 compute-0 systemd[1]: Started libpod-conmon-6bead8f52777481374f56c533dfb59a449645d1414d983595802df06ac78c9b3.scope.
Dec 15 10:40:12 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1674c70b002785745ff35b9b5efd056c28f4c3c2d23869290f82a42b1ddff834/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1674c70b002785745ff35b9b5efd056c28f4c3c2d23869290f82a42b1ddff834/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1674c70b002785745ff35b9b5efd056c28f4c3c2d23869290f82a42b1ddff834/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1674c70b002785745ff35b9b5efd056c28f4c3c2d23869290f82a42b1ddff834/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:12 compute-0 podman[100464]: 2025-12-15 10:40:12.759965467 +0000 UTC m=+0.113737953 container init 6bead8f52777481374f56c533dfb59a449645d1414d983595802df06ac78c9b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lewin, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 15 10:40:12 compute-0 podman[100464]: 2025-12-15 10:40:12.765629093 +0000 UTC m=+0.119401529 container start 6bead8f52777481374f56c533dfb59a449645d1414d983595802df06ac78c9b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 15 10:40:12 compute-0 podman[100464]: 2025-12-15 10:40:12.672602435 +0000 UTC m=+0.026374921 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:40:12 compute-0 podman[100464]: 2025-12-15 10:40:12.768680992 +0000 UTC m=+0.122453458 container attach 6bead8f52777481374f56c533dfb59a449645d1414d983595802df06ac78c9b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lewin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 15 10:40:12 compute-0 sudo[100508]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrdldjxfesyuzdjksgjkjvugwnjshceo ; /usr/bin/python3'
Dec 15 10:40:12 compute-0 sudo[100508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:40:12 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:40:12] "GET /metrics HTTP/1.1" 200 46571 "" "Prometheus/2.51.0"
Dec 15 10:40:12 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:40:12] "GET /metrics HTTP/1.1" 200 46571 "" "Prometheus/2.51.0"
Dec 15 10:40:12 compute-0 python3[100510]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:40:13 compute-0 podman[100525]: 2025-12-15 10:40:13.04716591 +0000 UTC m=+0.044241454 container create 365b6395fb6dca653972c8b546477e4fe6f092ec6e2336c636abd945e07dfe41 (image=quay.io/ceph/ceph:v19, name=infallible_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:40:13 compute-0 systemd[1]: Started libpod-conmon-365b6395fb6dca653972c8b546477e4fe6f092ec6e2336c636abd945e07dfe41.scope.
Dec 15 10:40:13 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/174597884c6f0a808953bb776356449f58411917cf0d8274188bb00c01bad830/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/174597884c6f0a808953bb776356449f58411917cf0d8274188bb00c01bad830/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:13 compute-0 podman[100525]: 2025-12-15 10:40:13.026104914 +0000 UTC m=+0.023180488 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:40:13 compute-0 podman[100525]: 2025-12-15 10:40:13.132834863 +0000 UTC m=+0.129910427 container init 365b6395fb6dca653972c8b546477e4fe6f092ec6e2336c636abd945e07dfe41 (image=quay.io/ceph/ceph:v19, name=infallible_jemison, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:40:13 compute-0 podman[100525]: 2025-12-15 10:40:13.142037512 +0000 UTC m=+0.139113056 container start 365b6395fb6dca653972c8b546477e4fe6f092ec6e2336c636abd945e07dfe41 (image=quay.io/ceph/ceph:v19, name=infallible_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:40:13 compute-0 podman[100525]: 2025-12-15 10:40:13.145342128 +0000 UTC m=+0.142417692 container attach 365b6395fb6dca653972c8b546477e4fe6f092ec6e2336c636abd945e07dfe41 (image=quay.io/ceph/ceph:v19, name=infallible_jemison, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 15 10:40:13 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 15 10:40:13 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:40:13 compute-0 infallible_jemison[100556]: ERROR: invalid flag --daemon-type
Dec 15 10:40:13 compute-0 systemd[1]: libpod-365b6395fb6dca653972c8b546477e4fe6f092ec6e2336c636abd945e07dfe41.scope: Deactivated successfully.
Dec 15 10:40:13 compute-0 podman[100525]: 2025-12-15 10:40:13.193076373 +0000 UTC m=+0.190151917 container died 365b6395fb6dca653972c8b546477e4fe6f092ec6e2336c636abd945e07dfe41 (image=quay.io/ceph/ceph:v19, name=infallible_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:40:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-174597884c6f0a808953bb776356449f58411917cf0d8274188bb00c01bad830-merged.mount: Deactivated successfully.
Dec 15 10:40:13 compute-0 podman[100525]: 2025-12-15 10:40:13.229144097 +0000 UTC m=+0.226219641 container remove 365b6395fb6dca653972c8b546477e4fe6f092ec6e2336c636abd945e07dfe41 (image=quay.io/ceph/ceph:v19, name=infallible_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 15 10:40:13 compute-0 systemd[1]: libpod-conmon-365b6395fb6dca653972c8b546477e4fe6f092ec6e2336c636abd945e07dfe41.scope: Deactivated successfully.
Dec 15 10:40:13 compute-0 sudo[100508]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:13 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:13 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003de0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:13 compute-0 lvm[100627]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 15 10:40:13 compute-0 lvm[100627]: VG ceph_vg0 finished
Dec 15 10:40:13 compute-0 amazing_lewin[100480]: {}
Dec 15 10:40:13 compute-0 systemd[1]: libpod-6bead8f52777481374f56c533dfb59a449645d1414d983595802df06ac78c9b3.scope: Deactivated successfully.
Dec 15 10:40:13 compute-0 systemd[1]: libpod-6bead8f52777481374f56c533dfb59a449645d1414d983595802df06ac78c9b3.scope: Consumed 1.050s CPU time.
Dec 15 10:40:13 compute-0 podman[100630]: 2025-12-15 10:40:13.457918981 +0000 UTC m=+0.025881797 container died 6bead8f52777481374f56c533dfb59a449645d1414d983595802df06ac78c9b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 15 10:40:13 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec 15 10:40:13 compute-0 ceph-mon[74356]: pgmap v18: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:13 compute-0 ceph-mon[74356]: 9.f scrub starts
Dec 15 10:40:13 compute-0 ceph-mon[74356]: 9.f scrub ok
Dec 15 10:40:13 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 15 10:40:13 compute-0 ceph-mon[74356]: osdmap e99: 3 total, 3 up, 3 in
Dec 15 10:40:13 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:40:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1674c70b002785745ff35b9b5efd056c28f4c3c2d23869290f82a42b1ddff834-merged.mount: Deactivated successfully.
Dec 15 10:40:13 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec 15 10:40:13 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec 15 10:40:13 compute-0 podman[100630]: 2025-12-15 10:40:13.498799525 +0000 UTC m=+0.066762331 container remove 6bead8f52777481374f56c533dfb59a449645d1414d983595802df06ac78c9b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lewin, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 15 10:40:13 compute-0 systemd[1]: libpod-conmon-6bead8f52777481374f56c533dfb59a449645d1414d983595802df06ac78c9b3.scope: Deactivated successfully.
Dec 15 10:40:13 compute-0 sudo[100359]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:13 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:40:13 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:13 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:40:13 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:13 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec 15 10:40:13 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:13 compute-0 sudo[100645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 15 10:40:13 compute-0 sudo[100645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:13 compute-0 sudo[100646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 15 10:40:13 compute-0 sudo[100645]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:13 compute-0 sudo[100646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:13 compute-0 sudo[100646]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:13 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Dec 15 10:40:13 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Dec 15 10:40:13 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 15 10:40:13 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 15 10:40:13 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 15 10:40:13 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 15 10:40:13 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:40:13 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:13 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec 15 10:40:13 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec 15 10:40:13 compute-0 sudo[100695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:40:13 compute-0 sudo[100695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:13 compute-0 sudo[100695]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:13 compute-0 sudo[100720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:40:13 compute-0 sudo[100720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:14 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v21: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:14 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Dec 15 10:40:14 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec 15 10:40:14 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:14 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003120 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:14 compute-0 podman[100760]: 2025-12-15 10:40:14.304091866 +0000 UTC m=+0.049353344 container create b24086947464af998894538eb6ef23285ab0365229c4704b10a065e5c6ab5ead (image=quay.io/ceph/ceph:v19, name=thirsty_heisenberg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:40:14 compute-0 systemd[1]: Started libpod-conmon-b24086947464af998894538eb6ef23285ab0365229c4704b10a065e5c6ab5ead.scope.
Dec 15 10:40:14 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:14 compute-0 podman[100760]: 2025-12-15 10:40:14.369836536 +0000 UTC m=+0.115098004 container init b24086947464af998894538eb6ef23285ab0365229c4704b10a065e5c6ab5ead (image=quay.io/ceph/ceph:v19, name=thirsty_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 15 10:40:14 compute-0 podman[100760]: 2025-12-15 10:40:14.28404595 +0000 UTC m=+0.029307408 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:40:14 compute-0 podman[100760]: 2025-12-15 10:40:14.38123707 +0000 UTC m=+0.126498508 container start b24086947464af998894538eb6ef23285ab0365229c4704b10a065e5c6ab5ead (image=quay.io/ceph/ceph:v19, name=thirsty_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 15 10:40:14 compute-0 podman[100760]: 2025-12-15 10:40:14.384880096 +0000 UTC m=+0.130141574 container attach b24086947464af998894538eb6ef23285ab0365229c4704b10a065e5c6ab5ead (image=quay.io/ceph/ceph:v19, name=thirsty_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 15 10:40:14 compute-0 thirsty_heisenberg[100776]: 167 167
Dec 15 10:40:14 compute-0 systemd[1]: libpod-b24086947464af998894538eb6ef23285ab0365229c4704b10a065e5c6ab5ead.scope: Deactivated successfully.
Dec 15 10:40:14 compute-0 podman[100760]: 2025-12-15 10:40:14.387581165 +0000 UTC m=+0.132842603 container died b24086947464af998894538eb6ef23285ab0365229c4704b10a065e5c6ab5ead (image=quay.io/ceph/ceph:v19, name=thirsty_heisenberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:40:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-df2ddae01150deb62e9be38a401bf8ac78580a9cc33523e715d3d002edf66cb5-merged.mount: Deactivated successfully.
Dec 15 10:40:14 compute-0 podman[100760]: 2025-12-15 10:40:14.429728607 +0000 UTC m=+0.174990055 container remove b24086947464af998894538eb6ef23285ab0365229c4704b10a065e5c6ab5ead (image=quay.io/ceph/ceph:v19, name=thirsty_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 15 10:40:14 compute-0 systemd[1]: libpod-conmon-b24086947464af998894538eb6ef23285ab0365229c4704b10a065e5c6ab5ead.scope: Deactivated successfully.
Dec 15 10:40:14 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:14 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000030s ======
Dec 15 10:40:14 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:14.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 15 10:40:14 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:14 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0003ea0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:14 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:14 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:14 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:14.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:14 compute-0 sudo[100720]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:14 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec 15 10:40:14 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:40:14 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 15 10:40:14 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec 15 10:40:14 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec 15 10:40:14 compute-0 ceph-mon[74356]: 10.b scrub starts
Dec 15 10:40:14 compute-0 ceph-mon[74356]: 10.b scrub ok
Dec 15 10:40:14 compute-0 ceph-mon[74356]: osdmap e100: 3 total, 3 up, 3 in
Dec 15 10:40:14 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:14 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:14 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:14 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 15 10:40:14 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 15 10:40:14 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:14 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec 15 10:40:14 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:14 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:40:14 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:14 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.difmqj (monmap changed)...
Dec 15 10:40:14 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.difmqj (monmap changed)...
Dec 15 10:40:14 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.difmqj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 15 10:40:14 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.difmqj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 15 10:40:14 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 15 10:40:14 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 15 10:40:14 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:40:14 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:14 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.difmqj on compute-0
Dec 15 10:40:14 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.difmqj on compute-0
Dec 15 10:40:14 compute-0 sudo[100792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:40:14 compute-0 sudo[100792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:14 compute-0 sudo[100792]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:14 compute-0 sudo[100817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:40:14 compute-0 sudo[100817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:15 compute-0 podman[100858]: 2025-12-15 10:40:15.20192432 +0000 UTC m=+0.045409949 container create a7e22f77ced2139e8bc12dd62648d18d380c3fa51b0262f26c46ab8d3d42cd41 (image=quay.io/ceph/ceph:v19, name=nostalgic_khorana, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:40:15 compute-0 systemd[1]: Started libpod-conmon-a7e22f77ced2139e8bc12dd62648d18d380c3fa51b0262f26c46ab8d3d42cd41.scope.
Dec 15 10:40:15 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:15 compute-0 podman[100858]: 2025-12-15 10:40:15.27516491 +0000 UTC m=+0.118650539 container init a7e22f77ced2139e8bc12dd62648d18d380c3fa51b0262f26c46ab8d3d42cd41 (image=quay.io/ceph/ceph:v19, name=nostalgic_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True)
Dec 15 10:40:15 compute-0 podman[100858]: 2025-12-15 10:40:15.181836102 +0000 UTC m=+0.025321761 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:40:15 compute-0 podman[100858]: 2025-12-15 10:40:15.279805085 +0000 UTC m=+0.123290724 container start a7e22f77ced2139e8bc12dd62648d18d380c3fa51b0262f26c46ab8d3d42cd41 (image=quay.io/ceph/ceph:v19, name=nostalgic_khorana, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec 15 10:40:15 compute-0 nostalgic_khorana[100874]: 167 167
Dec 15 10:40:15 compute-0 podman[100858]: 2025-12-15 10:40:15.285578174 +0000 UTC m=+0.129063793 container attach a7e22f77ced2139e8bc12dd62648d18d380c3fa51b0262f26c46ab8d3d42cd41 (image=quay.io/ceph/ceph:v19, name=nostalgic_khorana, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:40:15 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:15 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:15 compute-0 systemd[1]: libpod-a7e22f77ced2139e8bc12dd62648d18d380c3fa51b0262f26c46ab8d3d42cd41.scope: Deactivated successfully.
Dec 15 10:40:15 compute-0 conmon[100874]: conmon a7e22f77ced2139e8bc1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a7e22f77ced2139e8bc12dd62648d18d380c3fa51b0262f26c46ab8d3d42cd41.scope/container/memory.events
Dec 15 10:40:15 compute-0 podman[100858]: 2025-12-15 10:40:15.287671895 +0000 UTC m=+0.131157564 container died a7e22f77ced2139e8bc12dd62648d18d380c3fa51b0262f26c46ab8d3d42cd41 (image=quay.io/ceph/ceph:v19, name=nostalgic_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 15 10:40:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-e16f644f88c06ce3f0009441b341c97672cc846df0adbfd350a02a6a0265cfaf-merged.mount: Deactivated successfully.
Dec 15 10:40:15 compute-0 podman[100858]: 2025-12-15 10:40:15.334871435 +0000 UTC m=+0.178357054 container remove a7e22f77ced2139e8bc12dd62648d18d380c3fa51b0262f26c46ab8d3d42cd41 (image=quay.io/ceph/ceph:v19, name=nostalgic_khorana, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:40:15 compute-0 systemd[1]: libpod-conmon-a7e22f77ced2139e8bc12dd62648d18d380c3fa51b0262f26c46ab8d3d42cd41.scope: Deactivated successfully.
Dec 15 10:40:15 compute-0 sudo[100817]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:15 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:40:15 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:15 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:40:15 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:15 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Dec 15 10:40:15 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Dec 15 10:40:15 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 15 10:40:15 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 15 10:40:15 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:40:15 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:15 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Dec 15 10:40:15 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Dec 15 10:40:15 compute-0 sudo[100891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:40:15 compute-0 sudo[100891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:15 compute-0 sudo[100891]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:15 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec 15 10:40:15 compute-0 ceph-mon[74356]: Reconfiguring mon.compute-0 (monmap changed)...
Dec 15 10:40:15 compute-0 ceph-mon[74356]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 15 10:40:15 compute-0 ceph-mon[74356]: pgmap v21: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:15 compute-0 ceph-mon[74356]: 10.6 scrub starts
Dec 15 10:40:15 compute-0 ceph-mon[74356]: 10.6 scrub ok
Dec 15 10:40:15 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 15 10:40:15 compute-0 ceph-mon[74356]: osdmap e101: 3 total, 3 up, 3 in
Dec 15 10:40:15 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:15 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:15 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.difmqj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 15 10:40:15 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 15 10:40:15 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:15 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:15 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:15 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 15 10:40:15 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:15 compute-0 sudo[100916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:40:15 compute-0 sudo[100916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:15 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec 15 10:40:15 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec 15 10:40:15 compute-0 podman[100957]: 2025-12-15 10:40:15.81623344 +0000 UTC m=+0.035805708 container create 6d2d50abf28d56ab3f81637a561008abeb47064592fcaeae6269177c27e81a7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_shtern, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 15 10:40:15 compute-0 systemd[1]: Started libpod-conmon-6d2d50abf28d56ab3f81637a561008abeb47064592fcaeae6269177c27e81a7a.scope.
Dec 15 10:40:15 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:15 compute-0 podman[100957]: 2025-12-15 10:40:15.881743833 +0000 UTC m=+0.101316111 container init 6d2d50abf28d56ab3f81637a561008abeb47064592fcaeae6269177c27e81a7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_shtern, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 15 10:40:15 compute-0 podman[100957]: 2025-12-15 10:40:15.887381138 +0000 UTC m=+0.106953406 container start 6d2d50abf28d56ab3f81637a561008abeb47064592fcaeae6269177c27e81a7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_shtern, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 15 10:40:15 compute-0 podman[100957]: 2025-12-15 10:40:15.890911691 +0000 UTC m=+0.110483979 container attach 6d2d50abf28d56ab3f81637a561008abeb47064592fcaeae6269177c27e81a7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_shtern, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:40:15 compute-0 reverent_shtern[100974]: 167 167
Dec 15 10:40:15 compute-0 systemd[1]: libpod-6d2d50abf28d56ab3f81637a561008abeb47064592fcaeae6269177c27e81a7a.scope: Deactivated successfully.
Dec 15 10:40:15 compute-0 podman[100957]: 2025-12-15 10:40:15.89260128 +0000 UTC m=+0.112173568 container died 6d2d50abf28d56ab3f81637a561008abeb47064592fcaeae6269177c27e81a7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:40:15 compute-0 podman[100957]: 2025-12-15 10:40:15.800678994 +0000 UTC m=+0.020251282 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:40:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b8059549b5f12e573ef938af88e4a11a7d2438c2ebdeed7cd06f3b21571888a-merged.mount: Deactivated successfully.
Dec 15 10:40:15 compute-0 podman[100957]: 2025-12-15 10:40:15.925334617 +0000 UTC m=+0.144906885 container remove 6d2d50abf28d56ab3f81637a561008abeb47064592fcaeae6269177c27e81a7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_shtern, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 15 10:40:15 compute-0 systemd[1]: libpod-conmon-6d2d50abf28d56ab3f81637a561008abeb47064592fcaeae6269177c27e81a7a.scope: Deactivated successfully.
Dec 15 10:40:15 compute-0 sudo[100916]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:15 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:40:15 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:15 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:40:16 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v24: 353 pgs: 1 remapped+peering, 2 peering, 350 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:16 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:16 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Dec 15 10:40:16 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Dec 15 10:40:16 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 15 10:40:16 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 15 10:40:16 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:40:16 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:16 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-0
Dec 15 10:40:16 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-0
Dec 15 10:40:16 compute-0 sudo[100992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:40:16 compute-0 sudo[100992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:16 compute-0 sudo[100992]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:16 compute-0 sudo[101017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:40:16 compute-0 sudo[101017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:16 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:16 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003e00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:16 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:16 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:16 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003e00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:16 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:16 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:16.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:16 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:16 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:16 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:16.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:16 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec 15 10:40:16 compute-0 podman[101060]: 2025-12-15 10:40:16.589128503 +0000 UTC m=+0.040051222 container create a97fdba5e4991e2e0e4b448ad73248bdece15ff9f6ca6a33e2dad040ce7be3c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_payne, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 15 10:40:16 compute-0 systemd[1]: Started libpod-conmon-a97fdba5e4991e2e0e4b448ad73248bdece15ff9f6ca6a33e2dad040ce7be3c1.scope.
Dec 15 10:40:16 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:16 compute-0 podman[101060]: 2025-12-15 10:40:16.660972942 +0000 UTC m=+0.111895651 container init a97fdba5e4991e2e0e4b448ad73248bdece15ff9f6ca6a33e2dad040ce7be3c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_payne, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:40:16 compute-0 podman[101060]: 2025-12-15 10:40:16.666838723 +0000 UTC m=+0.117761422 container start a97fdba5e4991e2e0e4b448ad73248bdece15ff9f6ca6a33e2dad040ce7be3c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:40:16 compute-0 podman[101060]: 2025-12-15 10:40:16.572440094 +0000 UTC m=+0.023362823 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:40:16 compute-0 sad_payne[101077]: 167 167
Dec 15 10:40:16 compute-0 podman[101060]: 2025-12-15 10:40:16.670036507 +0000 UTC m=+0.120959226 container attach a97fdba5e4991e2e0e4b448ad73248bdece15ff9f6ca6a33e2dad040ce7be3c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_payne, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 15 10:40:16 compute-0 systemd[1]: libpod-a97fdba5e4991e2e0e4b448ad73248bdece15ff9f6ca6a33e2dad040ce7be3c1.scope: Deactivated successfully.
Dec 15 10:40:16 compute-0 podman[101060]: 2025-12-15 10:40:16.670497269 +0000 UTC m=+0.121419988 container died a97fdba5e4991e2e0e4b448ad73248bdece15ff9f6ca6a33e2dad040ce7be3c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 15 10:40:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b32d11cd976a0888d9f9bef6fe6880193f2ac3de0b7883460dfc258dce0c85c-merged.mount: Deactivated successfully.
Dec 15 10:40:16 compute-0 podman[101060]: 2025-12-15 10:40:16.705561915 +0000 UTC m=+0.156484624 container remove a97fdba5e4991e2e0e4b448ad73248bdece15ff9f6ca6a33e2dad040ce7be3c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_payne, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 15 10:40:16 compute-0 systemd[1]: libpod-conmon-a97fdba5e4991e2e0e4b448ad73248bdece15ff9f6ca6a33e2dad040ce7be3c1.scope: Deactivated successfully.
Dec 15 10:40:16 compute-0 ceph-mon[74356]: Reconfiguring mgr.compute-0.difmqj (monmap changed)...
Dec 15 10:40:16 compute-0 ceph-mon[74356]: Reconfiguring daemon mgr.compute-0.difmqj on compute-0
Dec 15 10:40:16 compute-0 ceph-mon[74356]: 10.1a scrub starts
Dec 15 10:40:16 compute-0 ceph-mon[74356]: 10.1a scrub ok
Dec 15 10:40:16 compute-0 ceph-mon[74356]: Reconfiguring crash.compute-0 (monmap changed)...
Dec 15 10:40:16 compute-0 ceph-mon[74356]: Reconfiguring daemon crash.compute-0 on compute-0
Dec 15 10:40:16 compute-0 ceph-mon[74356]: osdmap e102: 3 total, 3 up, 3 in
Dec 15 10:40:16 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:16 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:16 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 15 10:40:16 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:16 compute-0 sudo[101017]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:16 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:40:16 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec 15 10:40:16 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec 15 10:40:16 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:16 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:40:17 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:17 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-0 (unknown last config time)...
Dec 15 10:40:17 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-0 (unknown last config time)...
Dec 15 10:40:17 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-0 on compute-0
Dec 15 10:40:17 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-0 on compute-0
Dec 15 10:40:17 compute-0 sudo[101100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:40:17 compute-0 sudo[101100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:17 compute-0 sudo[101100]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:17 compute-0 sudo[101125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:40:17 compute-0 sudo[101125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:17 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:17 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0003ea0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:17 compute-0 systemd[1]: Stopping Ceph node-exporter.compute-0 for 77365f67-614e-5a8d-b658-640395550c79...
Dec 15 10:40:17 compute-0 podman[101199]: 2025-12-15 10:40:17.58805297 +0000 UTC m=+0.049109696 container died af7cec967afbf19ae8aa93bce2194c8b8c2dd9c500f2d7f44ece310d3a1d4cc1 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0d57fa9d9a894903ec32a7ea283c5a1e6ecadc100bd5a34536df4442b391b3d-merged.mount: Deactivated successfully.
Dec 15 10:40:17 compute-0 podman[101199]: 2025-12-15 10:40:17.629014267 +0000 UTC m=+0.090070993 container remove af7cec967afbf19ae8aa93bce2194c8b8c2dd9c500f2d7f44ece310d3a1d4cc1 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:17 compute-0 bash[101199]: ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0
Dec 15 10:40:17 compute-0 systemd[1]: ceph-77365f67-614e-5a8d-b658-640395550c79@node-exporter.compute-0.service: Main process exited, code=exited, status=143/n/a
Dec 15 10:40:17 compute-0 ceph-mon[74356]: pgmap v24: 353 pgs: 1 remapped+peering, 2 peering, 350 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:17 compute-0 ceph-mon[74356]: 10.7 scrub starts
Dec 15 10:40:17 compute-0 ceph-mon[74356]: 10.7 scrub ok
Dec 15 10:40:17 compute-0 ceph-mon[74356]: Reconfiguring osd.0 (monmap changed)...
Dec 15 10:40:17 compute-0 ceph-mon[74356]: Reconfiguring daemon osd.0 on compute-0
Dec 15 10:40:17 compute-0 ceph-mon[74356]: osdmap e103: 3 total, 3 up, 3 in
Dec 15 10:40:17 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:17 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:17 compute-0 ceph-mon[74356]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Dec 15 10:40:17 compute-0 ceph-mon[74356]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Dec 15 10:40:17 compute-0 systemd[1]: ceph-77365f67-614e-5a8d-b658-640395550c79@node-exporter.compute-0.service: Failed with result 'exit-code'.
Dec 15 10:40:17 compute-0 systemd[1]: Stopped Ceph node-exporter.compute-0 for 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:40:17 compute-0 systemd[1]: ceph-77365f67-614e-5a8d-b658-640395550c79@node-exporter.compute-0.service: Consumed 2.093s CPU time.
Dec 15 10:40:17 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for 77365f67-614e-5a8d-b658-640395550c79...
Dec 15 10:40:18 compute-0 podman[101297]: 2025-12-15 10:40:18.028936472 +0000 UTC m=+0.043530083 container create 4840dadc82f34f3188660955046170e50347603e400e5beeba87756514ef7863 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:18 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v26: 353 pgs: 1 remapped+peering, 2 peering, 350 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c4bf92a89b8670718991d3ee4de1337213d22bc33c8a15922d6f73ed3bbddb3/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:18 compute-0 podman[101297]: 2025-12-15 10:40:18.087454042 +0000 UTC m=+0.102047653 container init 4840dadc82f34f3188660955046170e50347603e400e5beeba87756514ef7863 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:18 compute-0 podman[101297]: 2025-12-15 10:40:18.092942152 +0000 UTC m=+0.107535763 container start 4840dadc82f34f3188660955046170e50347603e400e5beeba87756514ef7863 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:18 compute-0 bash[101297]: 4840dadc82f34f3188660955046170e50347603e400e5beeba87756514ef7863
Dec 15 10:40:18 compute-0 podman[101297]: 2025-12-15 10:40:18.010655978 +0000 UTC m=+0.025249619 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.099Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.099Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.099Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.099Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.100Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.100Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.100Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.100Z caller=node_exporter.go:117 level=info collector=arp
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.100Z caller=node_exporter.go:117 level=info collector=bcache
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.100Z caller=node_exporter.go:117 level=info collector=bonding
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.100Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.100Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.100Z caller=node_exporter.go:117 level=info collector=cpu
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.100Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.100Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.100Z caller=node_exporter.go:117 level=info collector=dmi
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.100Z caller=node_exporter.go:117 level=info collector=edac
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.100Z caller=node_exporter.go:117 level=info collector=entropy
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.100Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.100Z caller=node_exporter.go:117 level=info collector=filefd
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.100Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.100Z caller=node_exporter.go:117 level=info collector=hwmon
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=netclass
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=netdev
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=netstat
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=nfs
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=nvme
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=os
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=pressure
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=rapl
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=selinux
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=softnet
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=stat
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=textfile
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=thermal_zone
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=time
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=uname
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=xfs
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=node_exporter.go:117 level=info collector=zfs
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0[101312]: ts=2025-12-15T10:40:18.101Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Dec 15 10:40:18 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:40:18 compute-0 sudo[101125]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:18 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:40:18 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:18 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:40:18 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:18 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec 15 10:40:18 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec 15 10:40:18 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec 15 10:40:18 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec 15 10:40:18 compute-0 sudo[101321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:40:18 compute-0 sudo[101321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:18 compute-0 sudo[101321]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:18 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:18 compute-0 sudo[101346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:40:18 compute-0 sudo[101346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:18 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:18 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:18 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:18.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:18 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003a40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:18 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:18 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:18 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:18.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:18 compute-0 podman[101388]: 2025-12-15 10:40:18.630694035 +0000 UTC m=+0.034517599 volume create 3e0bfbaad063ac51720f55eda7982c16c9aab22f745f33fb75427d62a20f622c
Dec 15 10:40:18 compute-0 podman[101388]: 2025-12-15 10:40:18.642810049 +0000 UTC m=+0.046633653 container create 0b49acd45521840ba2bbb2c2957c7013eeecfcd33cc23563c5e141abffae3ad5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=amazing_kare, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:18 compute-0 systemd[1]: Started libpod-conmon-0b49acd45521840ba2bbb2c2957c7013eeecfcd33cc23563c5e141abffae3ad5.scope.
Dec 15 10:40:18 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:18 compute-0 podman[101388]: 2025-12-15 10:40:18.617947043 +0000 UTC m=+0.021770637 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 15 10:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd53182edc0cb72a4c322669d4a1226835c7090df94905dd2406cbf131512245/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:18 compute-0 podman[101388]: 2025-12-15 10:40:18.728475442 +0000 UTC m=+0.132299056 container init 0b49acd45521840ba2bbb2c2957c7013eeecfcd33cc23563c5e141abffae3ad5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=amazing_kare, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:18 compute-0 podman[101388]: 2025-12-15 10:40:18.735388524 +0000 UTC m=+0.139212098 container start 0b49acd45521840ba2bbb2c2957c7013eeecfcd33cc23563c5e141abffae3ad5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=amazing_kare, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:18 compute-0 amazing_kare[101404]: 65534 65534
Dec 15 10:40:18 compute-0 podman[101388]: 2025-12-15 10:40:18.738396302 +0000 UTC m=+0.142219936 container attach 0b49acd45521840ba2bbb2c2957c7013eeecfcd33cc23563c5e141abffae3ad5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=amazing_kare, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:18 compute-0 systemd[1]: libpod-0b49acd45521840ba2bbb2c2957c7013eeecfcd33cc23563c5e141abffae3ad5.scope: Deactivated successfully.
Dec 15 10:40:18 compute-0 conmon[101404]: conmon 0b49acd45521840ba2bb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0b49acd45521840ba2bbb2c2957c7013eeecfcd33cc23563c5e141abffae3ad5.scope/container/memory.events
Dec 15 10:40:18 compute-0 podman[101388]: 2025-12-15 10:40:18.740241156 +0000 UTC m=+0.144064740 container died 0b49acd45521840ba2bbb2c2957c7013eeecfcd33cc23563c5e141abffae3ad5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=amazing_kare, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd53182edc0cb72a4c322669d4a1226835c7090df94905dd2406cbf131512245-merged.mount: Deactivated successfully.
Dec 15 10:40:18 compute-0 podman[101388]: 2025-12-15 10:40:18.779796041 +0000 UTC m=+0.183619625 container remove 0b49acd45521840ba2bbb2c2957c7013eeecfcd33cc23563c5e141abffae3ad5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=amazing_kare, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:18 compute-0 podman[101388]: 2025-12-15 10:40:18.784172389 +0000 UTC m=+0.187995983 volume remove 3e0bfbaad063ac51720f55eda7982c16c9aab22f745f33fb75427d62a20f622c
Dec 15 10:40:18 compute-0 systemd[1]: libpod-conmon-0b49acd45521840ba2bbb2c2957c7013eeecfcd33cc23563c5e141abffae3ad5.scope: Deactivated successfully.
Dec 15 10:40:18 compute-0 podman[101421]: 2025-12-15 10:40:18.859544182 +0000 UTC m=+0.050854767 volume create 32920c230d390ce05d7753a8c2486875053126944b5fbfd0372d0457ce5b98fd
Dec 15 10:40:18 compute-0 podman[101421]: 2025-12-15 10:40:18.867973808 +0000 UTC m=+0.059284393 container create 79a56d0eb97e0cd617d2235dafd040805d451bde2c983b1959a0d09596de1297 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ecstatic_nightingale, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:18 compute-0 systemd[1]: Started libpod-conmon-79a56d0eb97e0cd617d2235dafd040805d451bde2c983b1959a0d09596de1297.scope.
Dec 15 10:40:18 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/853e1424941987b3cda9b041ed2e3065a14e13d4b0e1e50a2d49f45632893fab/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:18 compute-0 podman[101421]: 2025-12-15 10:40:18.9306665 +0000 UTC m=+0.121977075 container init 79a56d0eb97e0cd617d2235dafd040805d451bde2c983b1959a0d09596de1297 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ecstatic_nightingale, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:18 compute-0 podman[101421]: 2025-12-15 10:40:18.841721591 +0000 UTC m=+0.033032216 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 15 10:40:18 compute-0 podman[101421]: 2025-12-15 10:40:18.935527291 +0000 UTC m=+0.126837876 container start 79a56d0eb97e0cd617d2235dafd040805d451bde2c983b1959a0d09596de1297 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ecstatic_nightingale, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:18 compute-0 ecstatic_nightingale[101437]: 65534 65534
Dec 15 10:40:18 compute-0 systemd[1]: libpod-79a56d0eb97e0cd617d2235dafd040805d451bde2c983b1959a0d09596de1297.scope: Deactivated successfully.
Dec 15 10:40:18 compute-0 conmon[101437]: conmon 79a56d0eb97e0cd617d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-79a56d0eb97e0cd617d2235dafd040805d451bde2c983b1959a0d09596de1297.scope/container/memory.events
Dec 15 10:40:18 compute-0 podman[101421]: 2025-12-15 10:40:18.939112136 +0000 UTC m=+0.130422721 container attach 79a56d0eb97e0cd617d2235dafd040805d451bde2c983b1959a0d09596de1297 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ecstatic_nightingale, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:18 compute-0 podman[101421]: 2025-12-15 10:40:18.939413005 +0000 UTC m=+0.130723590 container died 79a56d0eb97e0cd617d2235dafd040805d451bde2c983b1959a0d09596de1297 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ecstatic_nightingale, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-853e1424941987b3cda9b041ed2e3065a14e13d4b0e1e50a2d49f45632893fab-merged.mount: Deactivated successfully.
Dec 15 10:40:18 compute-0 podman[101421]: 2025-12-15 10:40:18.974025137 +0000 UTC m=+0.165335722 container remove 79a56d0eb97e0cd617d2235dafd040805d451bde2c983b1959a0d09596de1297 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ecstatic_nightingale, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:18 compute-0 podman[101421]: 2025-12-15 10:40:18.978223409 +0000 UTC m=+0.169534024 volume remove 32920c230d390ce05d7753a8c2486875053126944b5fbfd0372d0457ce5b98fd
Dec 15 10:40:18 compute-0 systemd[1]: libpod-conmon-79a56d0eb97e0cd617d2235dafd040805d451bde2c983b1959a0d09596de1297.scope: Deactivated successfully.
Dec 15 10:40:19 compute-0 systemd[1]: Stopping Ceph alertmanager.compute-0 for 77365f67-614e-5a8d-b658-640395550c79...
Dec 15 10:40:19 compute-0 ceph-mon[74356]: pgmap v26: 353 pgs: 1 remapped+peering, 2 peering, 350 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:19 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:19 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:19 compute-0 ceph-mon[74356]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec 15 10:40:19 compute-0 ceph-mon[74356]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec 15 10:40:19 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0[95984]: ts=2025-12-15T10:40:19.204Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Dec 15 10:40:19 compute-0 podman[101485]: 2025-12-15 10:40:19.213628977 +0000 UTC m=+0.044106739 container died 73244696e42fd889cd12e816ba79afefa66014c2436355b7b29fda12a488ec50 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff2488e520edebdf17f614aad49dc9b8302542ba4f6e68f6bc26fc0bd6279d00-merged.mount: Deactivated successfully.
Dec 15 10:40:19 compute-0 podman[101485]: 2025-12-15 10:40:19.248483866 +0000 UTC m=+0.078961628 container remove 73244696e42fd889cd12e816ba79afefa66014c2436355b7b29fda12a488ec50 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:19 compute-0 podman[101485]: 2025-12-15 10:40:19.251756451 +0000 UTC m=+0.082234213 volume remove e2dba4d19103cd02742ca3e5c71c8a51018ce58965edfd5db577cbc2a9d2132e
Dec 15 10:40:19 compute-0 bash[101485]: ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0
Dec 15 10:40:19 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:19 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003e20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:19 compute-0 systemd[1]: ceph-77365f67-614e-5a8d-b658-640395550c79@alertmanager.compute-0.service: Deactivated successfully.
Dec 15 10:40:19 compute-0 systemd[1]: Stopped Ceph alertmanager.compute-0 for 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:40:19 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 77365f67-614e-5a8d-b658-640395550c79...
Dec 15 10:40:19 compute-0 podman[101588]: 2025-12-15 10:40:19.583544346 +0000 UTC m=+0.035855668 volume create e151a4be9d01accf64fbcadb224db648599b56564d5727023e1dc5ba29f51530
Dec 15 10:40:19 compute-0 podman[101588]: 2025-12-15 10:40:19.590985843 +0000 UTC m=+0.043297175 container create 89a20608330788e4c3363bfd799e4feb78e7548f505117a7fc750d8eecfef10f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93c461bd985551e7d55fc9cc8a8a64701b16d12609f4170558cf64272bb79b29/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93c461bd985551e7d55fc9cc8a8a64701b16d12609f4170558cf64272bb79b29/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:19 compute-0 podman[101588]: 2025-12-15 10:40:19.646028072 +0000 UTC m=+0.098339424 container init 89a20608330788e4c3363bfd799e4feb78e7548f505117a7fc750d8eecfef10f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:19 compute-0 podman[101588]: 2025-12-15 10:40:19.651787271 +0000 UTC m=+0.104098603 container start 89a20608330788e4c3363bfd799e4feb78e7548f505117a7fc750d8eecfef10f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:19 compute-0 bash[101588]: 89a20608330788e4c3363bfd799e4feb78e7548f505117a7fc750d8eecfef10f
Dec 15 10:40:19 compute-0 podman[101588]: 2025-12-15 10:40:19.567744345 +0000 UTC m=+0.020055707 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 15 10:40:19 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:40:19 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0[101604]: ts=2025-12-15T10:40:19.677Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Dec 15 10:40:19 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0[101604]: ts=2025-12-15T10:40:19.677Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Dec 15 10:40:19 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0[101604]: ts=2025-12-15T10:40:19.685Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Dec 15 10:40:19 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0[101604]: ts=2025-12-15T10:40:19.686Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Dec 15 10:40:19 compute-0 sudo[101346]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:19 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:40:19 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0[101604]: ts=2025-12-15T10:40:19.730Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Dec 15 10:40:19 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0[101604]: ts=2025-12-15T10:40:19.730Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Dec 15 10:40:19 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0[101604]: ts=2025-12-15T10:40:19.734Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Dec 15 10:40:19 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0[101604]: ts=2025-12-15T10:40:19.734Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Dec 15 10:40:19 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:19 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:40:19 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:19 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Dec 15 10:40:19 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Dec 15 10:40:19 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Dec 15 10:40:19 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Dec 15 10:40:19 compute-0 sudo[101625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:40:19 compute-0 sudo[101625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:19 compute-0 sudo[101625]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:19 compute-0 sudo[101650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid 77365f67-614e-5a8d-b658-640395550c79
Dec 15 10:40:19 compute-0 sudo[101650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:20 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v27: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:20 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Dec 15 10:40:20 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec 15 10:40:20 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:20 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0003ec0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:20 compute-0 podman[101691]: 2025-12-15 10:40:20.381029668 +0000 UTC m=+0.051312171 container create ea6ceac92b18ed75f5f18a043e6e79073e46a8d75bd8b1c65524f8fece458bd7 (image=quay.io/ceph/grafana:10.4.0, name=quizzical_franklin, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:40:20 compute-0 systemd[1]: Started libpod-conmon-ea6ceac92b18ed75f5f18a043e6e79073e46a8d75bd8b1c65524f8fece458bd7.scope.
Dec 15 10:40:20 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:20 compute-0 podman[101691]: 2025-12-15 10:40:20.354593796 +0000 UTC m=+0.024876349 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 15 10:40:20 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:20 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:20 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:20.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:20 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:20 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0003ec0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:20 compute-0 podman[101691]: 2025-12-15 10:40:20.452918178 +0000 UTC m=+0.123200691 container init ea6ceac92b18ed75f5f18a043e6e79073e46a8d75bd8b1c65524f8fece458bd7 (image=quay.io/ceph/grafana:10.4.0, name=quizzical_franklin, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:40:20 compute-0 podman[101691]: 2025-12-15 10:40:20.460233352 +0000 UTC m=+0.130515855 container start ea6ceac92b18ed75f5f18a043e6e79073e46a8d75bd8b1c65524f8fece458bd7 (image=quay.io/ceph/grafana:10.4.0, name=quizzical_franklin, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:40:20 compute-0 podman[101691]: 2025-12-15 10:40:20.463506008 +0000 UTC m=+0.133788511 container attach ea6ceac92b18ed75f5f18a043e6e79073e46a8d75bd8b1c65524f8fece458bd7 (image=quay.io/ceph/grafana:10.4.0, name=quizzical_franklin, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:40:20 compute-0 quizzical_franklin[101707]: 472 0
Dec 15 10:40:20 compute-0 systemd[1]: libpod-ea6ceac92b18ed75f5f18a043e6e79073e46a8d75bd8b1c65524f8fece458bd7.scope: Deactivated successfully.
Dec 15 10:40:20 compute-0 podman[101691]: 2025-12-15 10:40:20.464918859 +0000 UTC m=+0.135201382 container died ea6ceac92b18ed75f5f18a043e6e79073e46a8d75bd8b1c65524f8fece458bd7 (image=quay.io/ceph/grafana:10.4.0, name=quizzical_franklin, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:40:20 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:20 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 15 10:40:20 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:20.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 15 10:40:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b699a89a62e57de31d8275e4b3de166cc5d0f4ef1154d2023eec6be154a8699-merged.mount: Deactivated successfully.
Dec 15 10:40:20 compute-0 podman[101691]: 2025-12-15 10:40:20.501925161 +0000 UTC m=+0.172207694 container remove ea6ceac92b18ed75f5f18a043e6e79073e46a8d75bd8b1c65524f8fece458bd7 (image=quay.io/ceph/grafana:10.4.0, name=quizzical_franklin, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:40:20 compute-0 systemd[1]: libpod-conmon-ea6ceac92b18ed75f5f18a043e6e79073e46a8d75bd8b1c65524f8fece458bd7.scope: Deactivated successfully.
Dec 15 10:40:20 compute-0 podman[101724]: 2025-12-15 10:40:20.576030125 +0000 UTC m=+0.051138774 container create 236c4364602327f54774588a723023da6531798fa00c98af9bbed70fec096abf (image=quay.io/ceph/grafana:10.4.0, name=thirsty_gauss, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:40:20 compute-0 systemd[1]: Started libpod-conmon-236c4364602327f54774588a723023da6531798fa00c98af9bbed70fec096abf.scope.
Dec 15 10:40:20 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:20 compute-0 podman[101724]: 2025-12-15 10:40:20.554910329 +0000 UTC m=+0.030019018 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 15 10:40:20 compute-0 podman[101724]: 2025-12-15 10:40:20.651688006 +0000 UTC m=+0.126796705 container init 236c4364602327f54774588a723023da6531798fa00c98af9bbed70fec096abf (image=quay.io/ceph/grafana:10.4.0, name=thirsty_gauss, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:40:20 compute-0 podman[101724]: 2025-12-15 10:40:20.657829816 +0000 UTC m=+0.132938515 container start 236c4364602327f54774588a723023da6531798fa00c98af9bbed70fec096abf (image=quay.io/ceph/grafana:10.4.0, name=thirsty_gauss, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:40:20 compute-0 thirsty_gauss[101740]: 472 0
Dec 15 10:40:20 compute-0 systemd[1]: libpod-236c4364602327f54774588a723023da6531798fa00c98af9bbed70fec096abf.scope: Deactivated successfully.
Dec 15 10:40:20 compute-0 conmon[101740]: conmon 236c4364602327f54774 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-236c4364602327f54774588a723023da6531798fa00c98af9bbed70fec096abf.scope/container/memory.events
Dec 15 10:40:20 compute-0 podman[101724]: 2025-12-15 10:40:20.66239451 +0000 UTC m=+0.137503199 container attach 236c4364602327f54774588a723023da6531798fa00c98af9bbed70fec096abf (image=quay.io/ceph/grafana:10.4.0, name=thirsty_gauss, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:40:20 compute-0 podman[101724]: 2025-12-15 10:40:20.662721608 +0000 UTC m=+0.137830297 container died 236c4364602327f54774588a723023da6531798fa00c98af9bbed70fec096abf (image=quay.io/ceph/grafana:10.4.0, name=thirsty_gauss, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:40:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e45960472d22837fb6da8bfbf41e9c565a4f60fecde364dc5a2b0510ac3f81ea-merged.mount: Deactivated successfully.
Dec 15 10:40:20 compute-0 podman[101724]: 2025-12-15 10:40:20.709906778 +0000 UTC m=+0.185015457 container remove 236c4364602327f54774588a723023da6531798fa00c98af9bbed70fec096abf (image=quay.io/ceph/grafana:10.4.0, name=thirsty_gauss, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:40:20 compute-0 systemd[1]: libpod-conmon-236c4364602327f54774588a723023da6531798fa00c98af9bbed70fec096abf.scope: Deactivated successfully.
Dec 15 10:40:20 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:20 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:20 compute-0 ceph-mon[74356]: Reconfiguring grafana.compute-0 (dependencies changed)...
Dec 15 10:40:20 compute-0 ceph-mon[74356]: Reconfiguring daemon grafana.compute-0 on compute-0
Dec 15 10:40:20 compute-0 ceph-mon[74356]: pgmap v27: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:20 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec 15 10:40:20 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec 15 10:40:20 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 15 10:40:20 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec 15 10:40:20 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec 15 10:40:20 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 104 pg[10.12( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=4 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=104 pruub=14.569835663s) [2] r=-1 lpr=104 pi=[68,104)/1 crt=54'1067 mlcod 0'0 active pruub 235.044097900s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:40:20 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 104 pg[10.12( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=4 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=104 pruub=14.569537163s) [2] r=-1 lpr=104 pi=[68,104)/1 crt=54'1067 mlcod 0'0 unknown NOTIFY pruub 235.044097900s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:40:20 compute-0 systemd[1]: Stopping Ceph grafana.compute-0 for 77365f67-614e-5a8d-b658-640395550c79...
Dec 15 10:40:20 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=infra.usagestats t=2025-12-15T10:40:20.950658222Z level=info msg="Usage stats are ready to report"
Dec 15 10:40:20 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=server t=2025-12-15T10:40:20.9974683Z level=info msg="Shutdown started" reason="System signal: terminated"
Dec 15 10:40:20 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=ticker t=2025-12-15T10:40:20.997520811Z level=info msg=stopped last_tick=2025-12-15T10:40:20Z
Dec 15 10:40:20 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=tracing t=2025-12-15T10:40:20.997781959Z level=info msg="Closing tracing"
Dec 15 10:40:20 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=grafana-apiserver t=2025-12-15T10:40:20.997967294Z level=info msg="StorageObjectCountTracker pruner is exiting"
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[96586]: logger=sqlstore.transactions t=2025-12-15T10:40:21.009337887Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec 15 10:40:21 compute-0 podman[101788]: 2025-12-15 10:40:21.028388363 +0000 UTC m=+0.069673746 container died 23d65a61651e1c0b44eb1273a6940cc9a6f8c6b86de8d6db3e4162bb6add8e05 (image=quay.io/ceph/grafana:10.4.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:40:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c4623f86540b7075657d1cfaedf01c2719b281ffe7620ca0d63bd363343d6fe-merged.mount: Deactivated successfully.
Dec 15 10:40:21 compute-0 podman[101788]: 2025-12-15 10:40:21.080291649 +0000 UTC m=+0.121577052 container remove 23d65a61651e1c0b44eb1273a6940cc9a6f8c6b86de8d6db3e4162bb6add8e05 (image=quay.io/ceph/grafana:10.4.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:40:21 compute-0 bash[101788]: ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0
Dec 15 10:40:21 compute-0 systemd[1]: ceph-77365f67-614e-5a8d-b658-640395550c79@grafana.compute-0.service: Deactivated successfully.
Dec 15 10:40:21 compute-0 systemd[1]: Stopped Ceph grafana.compute-0 for 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:40:21 compute-0 systemd[1]: ceph-77365f67-614e-5a8d-b658-640395550c79@grafana.compute-0.service: Consumed 4.168s CPU time.
Dec 15 10:40:21 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 77365f67-614e-5a8d-b658-640395550c79...
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:21 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003a40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:21 compute-0 podman[101895]: 2025-12-15 10:40:21.472630893 +0000 UTC m=+0.048040515 container create 904de0ff1c362d9a2eedc2ea5dd43957c068393257172a115a9781fe3109347c (image=quay.io/ceph/grafana:10.4.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:40:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bef5f116f823aa77719c8c9a94f331b766de9f5af1e57620ad7347f0c108a89/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bef5f116f823aa77719c8c9a94f331b766de9f5af1e57620ad7347f0c108a89/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bef5f116f823aa77719c8c9a94f331b766de9f5af1e57620ad7347f0c108a89/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bef5f116f823aa77719c8c9a94f331b766de9f5af1e57620ad7347f0c108a89/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bef5f116f823aa77719c8c9a94f331b766de9f5af1e57620ad7347f0c108a89/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:21 compute-0 podman[101895]: 2025-12-15 10:40:21.525394655 +0000 UTC m=+0.100804307 container init 904de0ff1c362d9a2eedc2ea5dd43957c068393257172a115a9781fe3109347c (image=quay.io/ceph/grafana:10.4.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:40:21 compute-0 podman[101895]: 2025-12-15 10:40:21.53103604 +0000 UTC m=+0.106445662 container start 904de0ff1c362d9a2eedc2ea5dd43957c068393257172a115a9781fe3109347c (image=quay.io/ceph/grafana:10.4.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:40:21 compute-0 bash[101895]: 904de0ff1c362d9a2eedc2ea5dd43957c068393257172a115a9781fe3109347c
Dec 15 10:40:21 compute-0 podman[101895]: 2025-12-15 10:40:21.450360253 +0000 UTC m=+0.025769905 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 15 10:40:21 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 77365f67-614e-5a8d-b658-640395550c79.
Dec 15 10:40:21 compute-0 sudo[101650]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:40:21 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:40:21 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:21 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Dec 15 10:40:21 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Dec 15 10:40:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 15 10:40:21 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 15 10:40:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:40:21 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:21 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Dec 15 10:40:21 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0[101604]: ts=2025-12-15T10:40:21.687Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000595125s
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=settings t=2025-12-15T10:40:21.728733017Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-12-15T10:40:21Z
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=settings t=2025-12-15T10:40:21.729019055Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=settings t=2025-12-15T10:40:21.729025975Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=settings t=2025-12-15T10:40:21.729030356Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=settings t=2025-12-15T10:40:21.729034036Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=settings t=2025-12-15T10:40:21.729037306Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=settings t=2025-12-15T10:40:21.729040586Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=settings t=2025-12-15T10:40:21.729044096Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=settings t=2025-12-15T10:40:21.729047996Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=settings t=2025-12-15T10:40:21.729051676Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=settings t=2025-12-15T10:40:21.729055016Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=settings t=2025-12-15T10:40:21.729058526Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=settings t=2025-12-15T10:40:21.729061876Z level=info msg=Target target=[all]
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=settings t=2025-12-15T10:40:21.729068427Z level=info msg="Path Home" path=/usr/share/grafana
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=settings t=2025-12-15T10:40:21.729071697Z level=info msg="Path Data" path=/var/lib/grafana
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=settings t=2025-12-15T10:40:21.729074987Z level=info msg="Path Logs" path=/var/log/grafana
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=settings t=2025-12-15T10:40:21.729078117Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=settings t=2025-12-15T10:40:21.729081507Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=settings t=2025-12-15T10:40:21.729084757Z level=info msg="App mode production"
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=sqlstore t=2025-12-15T10:40:21.729414056Z level=info msg="Connecting to DB" dbtype=sqlite3
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=sqlstore t=2025-12-15T10:40:21.729435887Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=migrator t=2025-12-15T10:40:21.729993273Z level=info msg="Starting DB migrations"
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=migrator t=2025-12-15T10:40:21.746931408Z level=info msg="migrations completed" performed=0 skipped=547 duration=520.195µs
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=sqlstore t=2025-12-15T10:40:21.747783243Z level=info msg="Created default organization"
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=secrets t=2025-12-15T10:40:21.748208106Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=plugin.store t=2025-12-15T10:40:21.769220059Z level=info msg="Loading plugins..."
Dec 15 10:40:21 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 15 10:40:21 compute-0 ceph-mon[74356]: osdmap e104: 3 total, 3 up, 3 in
Dec 15 10:40:21 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:21 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:21 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 15 10:40:21 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec 15 10:40:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec 15 10:40:21 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec 15 10:40:21 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 105 pg[10.12( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=4 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=105) [2]/[0] r=0 lpr=105 pi=[68,105)/1 crt=54'1067 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:40:21 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 105 pg[10.12( v 54'1067 (0'0,54'1067] local-lis/les=68/69 n=4 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=105) [2]/[0] r=0 lpr=105 pi=[68,105)/1 crt=54'1067 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 15 10:40:21 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=local.finder t=2025-12-15T10:40:21.854030768Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=plugin.store t=2025-12-15T10:40:21.854070029Z level=info msg="Plugins loaded" count=55 duration=84.85162ms
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=query_data t=2025-12-15T10:40:21.857428837Z level=info msg="Query Service initialization"
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=live.push_http t=2025-12-15T10:40:21.861610869Z level=info msg="Live Push Gateway initialization"
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=ngalert.migration t=2025-12-15T10:40:21.869323684Z level=info msg=Starting
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=ngalert.state.manager t=2025-12-15T10:40:21.891942885Z level=info msg="Running in alternative execution of Error/NoData mode"
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=infra.usagestats.collector t=2025-12-15T10:40:21.893857171Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=provisioning.datasources t=2025-12-15T10:40:21.896149808Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=provisioning.alerting t=2025-12-15T10:40:21.921031675Z level=info msg="starting to provision alerting"
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=provisioning.alerting t=2025-12-15T10:40:21.921057016Z level=info msg="finished to provision alerting"
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=grafanaStorageLogger t=2025-12-15T10:40:21.92151856Z level=info msg="Storage starting"
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=ngalert.state.manager t=2025-12-15T10:40:21.921807098Z level=info msg="Warming state cache for startup"
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=http.server t=2025-12-15T10:40:21.923998672Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=http.server t=2025-12-15T10:40:21.924301851Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Dec 15 10:40:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=ngalert.multiorg.alertmanager t=2025-12-15T10:40:21.941501164Z level=info msg="Starting MultiOrg Alertmanager"
Dec 15 10:40:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=ngalert.state.manager t=2025-12-15T10:40:22.007935675Z level=info msg="State cache has been initialized" states=0 duration=86.127496ms
Dec 15 10:40:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=ngalert.scheduler t=2025-12-15T10:40:22.008074759Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Dec 15 10:40:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=ticker t=2025-12-15T10:40:22.008172832Z level=info msg=starting first_tick=2025-12-15T10:40:30Z
Dec 15 10:40:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=provisioning.dashboard t=2025-12-15T10:40:22.009802769Z level=info msg="starting to provision dashboards"
Dec 15 10:40:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=provisioning.dashboard t=2025-12-15T10:40:22.032136881Z level=info msg="finished to provision dashboards"
Dec 15 10:40:22 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v30: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:22 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Dec 15 10:40:22 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec 15 10:40:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=plugins.update.checker t=2025-12-15T10:40:22.24939629Z level=info msg="Update check succeeded" duration=327.605623ms
Dec 15 10:40:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=grafana.update.checker t=2025-12-15T10:40:22.250143141Z level=info msg="Update check succeeded" duration=308.537054ms
Dec 15 10:40:22 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:40:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:22 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003e40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:22 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:22 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:40:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=grafana-apiserver t=2025-12-15T10:40:22.328466Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Dec 15 10:40:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=grafana-apiserver t=2025-12-15T10:40:22.328927883Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Dec 15 10:40:22 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:22 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Dec 15 10:40:22 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Dec 15 10:40:22 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 15 10:40:22 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 15 10:40:22 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:40:22 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:22 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-1
Dec 15 10:40:22 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-1
Dec 15 10:40:22 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:22 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:22 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:22.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:22 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:22 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:22 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:22 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:22.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:22 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec 15 10:40:22 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 15 10:40:22 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec 15 10:40:22 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec 15 10:40:22 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 106 pg[10.13( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=66/66 les/c/f=67/67/0 sis=106) [0] r=0 lpr=106 pi=[66,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:40:22 compute-0 ceph-mon[74356]: Reconfiguring crash.compute-1 (monmap changed)...
Dec 15 10:40:22 compute-0 ceph-mon[74356]: Reconfiguring daemon crash.compute-1 on compute-1
Dec 15 10:40:22 compute-0 ceph-mon[74356]: osdmap e105: 3 total, 3 up, 3 in
Dec 15 10:40:22 compute-0 ceph-mon[74356]: pgmap v30: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:22 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec 15 10:40:22 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:22 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:22 compute-0 ceph-mon[74356]: Reconfiguring osd.1 (monmap changed)...
Dec 15 10:40:22 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 15 10:40:22 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:22 compute-0 ceph-mon[74356]: Reconfiguring daemon osd.1 on compute-1
Dec 15 10:40:22 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 106 pg[10.12( v 54'1067 (0'0,54'1067] local-lis/les=105/106 n=4 ec=59/47 lis/c=68/68 les/c/f=69/69/0 sis=105) [2]/[0] async=[2] r=0 lpr=105 pi=[68,105)/1 crt=54'1067 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:40:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:40:22] "GET /metrics HTTP/1.1" 200 48237 "" "Prometheus/2.51.0"
Dec 15 10:40:22 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:40:22] "GET /metrics HTTP/1.1" 200 48237 "" "Prometheus/2.51.0"
Dec 15 10:40:23 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:40:23 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:23 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:40:23 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:23 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Dec 15 10:40:23 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Dec 15 10:40:23 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 15 10:40:23 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 15 10:40:23 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 15 10:40:23 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 15 10:40:23 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:40:23 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:23 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Dec 15 10:40:23 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Dec 15 10:40:23 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:23 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004060 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:23 compute-0 sudo[101963]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntkyyvhmytxmivggoepbdcfuzohkuopq ; /usr/bin/python3'
Dec 15 10:40:23 compute-0 sudo[101963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:40:23 compute-0 python3[101965]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:40:23 compute-0 podman[101966]: 2025-12-15 10:40:23.561056445 +0000 UTC m=+0.061264841 container create 645787249d57036c4de52150c35f7c58d07692c6be89ac54a625d98d23751942 (image=quay.io/ceph/ceph:v19, name=elated_proskuriakova, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 15 10:40:23 compute-0 systemd[1]: Started libpod-conmon-645787249d57036c4de52150c35f7c58d07692c6be89ac54a625d98d23751942.scope.
Dec 15 10:40:23 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea3b4a7d2a20326dd88f50070dccd7b728ea8212eba18b2bf870c9d117cde70/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea3b4a7d2a20326dd88f50070dccd7b728ea8212eba18b2bf870c9d117cde70/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:23 compute-0 podman[101966]: 2025-12-15 10:40:23.629711411 +0000 UTC m=+0.129919797 container init 645787249d57036c4de52150c35f7c58d07692c6be89ac54a625d98d23751942 (image=quay.io/ceph/ceph:v19, name=elated_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:40:23 compute-0 podman[101966]: 2025-12-15 10:40:23.537215689 +0000 UTC m=+0.037424125 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:40:23 compute-0 podman[101966]: 2025-12-15 10:40:23.635335896 +0000 UTC m=+0.135544272 container start 645787249d57036c4de52150c35f7c58d07692c6be89ac54a625d98d23751942 (image=quay.io/ceph/ceph:v19, name=elated_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:40:23 compute-0 podman[101966]: 2025-12-15 10:40:23.638075266 +0000 UTC m=+0.138283672 container attach 645787249d57036c4de52150c35f7c58d07692c6be89ac54a625d98d23751942 (image=quay.io/ceph/ceph:v19, name=elated_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 15 10:40:23 compute-0 elated_proskuriakova[101981]: ERROR: invalid flag --daemon-type
Dec 15 10:40:23 compute-0 systemd[1]: libpod-645787249d57036c4de52150c35f7c58d07692c6be89ac54a625d98d23751942.scope: Deactivated successfully.
Dec 15 10:40:23 compute-0 podman[101966]: 2025-12-15 10:40:23.685170452 +0000 UTC m=+0.185378848 container died 645787249d57036c4de52150c35f7c58d07692c6be89ac54a625d98d23751942 (image=quay.io/ceph/ceph:v19, name=elated_proskuriakova, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:40:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-dea3b4a7d2a20326dd88f50070dccd7b728ea8212eba18b2bf870c9d117cde70-merged.mount: Deactivated successfully.
Dec 15 10:40:23 compute-0 podman[101966]: 2025-12-15 10:40:23.737174901 +0000 UTC m=+0.237383317 container remove 645787249d57036c4de52150c35f7c58d07692c6be89ac54a625d98d23751942 (image=quay.io/ceph/ceph:v19, name=elated_proskuriakova, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 15 10:40:23 compute-0 systemd[1]: libpod-conmon-645787249d57036c4de52150c35f7c58d07692c6be89ac54a625d98d23751942.scope: Deactivated successfully.
Dec 15 10:40:23 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 15 10:40:23 compute-0 sudo[101963]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:23 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:23 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 15 10:40:23 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:23 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Dec 15 10:40:23 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Dec 15 10:40:23 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 15 10:40:23 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 15 10:40:23 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 15 10:40:23 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 15 10:40:23 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:40:23 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:23 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Dec 15 10:40:23 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Dec 15 10:40:23 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec 15 10:40:23 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec 15 10:40:23 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec 15 10:40:23 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 107 pg[10.13( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=66/66 les/c/f=67/67/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[66,107)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:40:23 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 107 pg[10.12( v 54'1067 (0'0,54'1067] local-lis/les=105/106 n=4 ec=59/47 lis/c=105/68 les/c/f=106/69/0 sis=107 pruub=14.993339539s) [2] async=[2] r=-1 lpr=107 pi=[68,107)/1 crt=54'1067 mlcod 54'1067 active pruub 238.520996094s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:40:23 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 107 pg[10.13( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=66/66 les/c/f=67/67/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[66,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 15 10:40:23 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 107 pg[10.12( v 54'1067 (0'0,54'1067] local-lis/les=105/106 n=4 ec=59/47 lis/c=105/68 les/c/f=106/69/0 sis=107 pruub=14.993278503s) [2] r=-1 lpr=107 pi=[68,107)/1 crt=54'1067 mlcod 0'0 unknown NOTIFY pruub 238.520996094s@ mbc={}] state<Start>: transitioning to Stray
Dec 15 10:40:23 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 15 10:40:23 compute-0 ceph-mon[74356]: osdmap e106: 3 total, 3 up, 3 in
Dec 15 10:40:23 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:23 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:23 compute-0 ceph-mon[74356]: Reconfiguring mon.compute-1 (monmap changed)...
Dec 15 10:40:23 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 15 10:40:23 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 15 10:40:23 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:23 compute-0 ceph-mon[74356]: Reconfiguring daemon mon.compute-1 on compute-1
Dec 15 10:40:23 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:23 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:23 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 15 10:40:23 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 15 10:40:23 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:24 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v33: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:24 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Dec 15 10:40:24 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec 15 10:40:24 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:24 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004060 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:24 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:24 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:24 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:24.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:24 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:24 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:24 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:24 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:24 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:24.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:24 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:40:24 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:24 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:40:24 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:24 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.gxhwsu (monmap changed)...
Dec 15 10:40:24 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.gxhwsu (monmap changed)...
Dec 15 10:40:24 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.gxhwsu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 15 10:40:24 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gxhwsu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 15 10:40:24 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 15 10:40:24 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 15 10:40:24 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:40:24 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:24 compute-0 ceph-mgr[74651]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.gxhwsu on compute-2
Dec 15 10:40:24 compute-0 ceph-mgr[74651]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.gxhwsu on compute-2
Dec 15 10:40:24 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec 15 10:40:24 compute-0 ceph-mon[74356]: Reconfiguring mon.compute-2 (monmap changed)...
Dec 15 10:40:24 compute-0 ceph-mon[74356]: Reconfiguring daemon mon.compute-2 on compute-2
Dec 15 10:40:24 compute-0 ceph-mon[74356]: osdmap e107: 3 total, 3 up, 3 in
Dec 15 10:40:24 compute-0 ceph-mon[74356]: pgmap v33: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:24 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec 15 10:40:24 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:24 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:24 compute-0 ceph-mon[74356]: Reconfiguring mgr.compute-2.gxhwsu (monmap changed)...
Dec 15 10:40:24 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gxhwsu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 15 10:40:24 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 15 10:40:24 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:24 compute-0 ceph-mon[74356]: Reconfiguring daemon mgr.compute-2.gxhwsu on compute-2
Dec 15 10:40:24 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 15 10:40:24 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec 15 10:40:24 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec 15 10:40:24 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 108 pg[10.14( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=73/73 les/c/f=74/74/0 sis=108) [0] r=0 lpr=108 pi=[73,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:40:25 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 15 10:40:25 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:25 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 15 10:40:25 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:25 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Dec 15 10:40:25 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec 15 10:40:25 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec 15 10:40:25 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Dec 15 10:40:25 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec 15 10:40:25 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec 15 10:40:25 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Dec 15 10:40:25 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec 15 10:40:25 compute-0 ceph-mgr[74651]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec 15 10:40:25 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Dec 15 10:40:25 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:25 compute-0 ceph-mgr[74651]: [prometheus INFO root] Restarting engine...
Dec 15 10:40:25 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: [15/Dec/2025:10:40:25] ENGINE Bus STOPPING
Dec 15 10:40:25 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.error] [15/Dec/2025:10:40:25] ENGINE Bus STOPPING
Dec 15 10:40:25 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:25 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003e80 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:25 compute-0 sudo[102014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:40:25 compute-0 sudo[102014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:25 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: [15/Dec/2025:10:40:25] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Dec 15 10:40:25 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: [15/Dec/2025:10:40:25] ENGINE Bus STOPPED
Dec 15 10:40:25 compute-0 sudo[102014]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:25 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.error] [15/Dec/2025:10:40:25] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Dec 15 10:40:25 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.error] [15/Dec/2025:10:40:25] ENGINE Bus STOPPED
Dec 15 10:40:25 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: [15/Dec/2025:10:40:25] ENGINE Bus STARTING
Dec 15 10:40:25 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.error] [15/Dec/2025:10:40:25] ENGINE Bus STARTING
Dec 15 10:40:25 compute-0 sudo[102050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 15 10:40:25 compute-0 sudo[102050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:25 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: [15/Dec/2025:10:40:25] ENGINE Serving on http://:::9283
Dec 15 10:40:25 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: [15/Dec/2025:10:40:25] ENGINE Bus STARTED
Dec 15 10:40:25 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.error] [15/Dec/2025:10:40:25] ENGINE Serving on http://:::9283
Dec 15 10:40:25 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.error] [15/Dec/2025:10:40:25] ENGINE Bus STARTED
Dec 15 10:40:25 compute-0 ceph-mgr[74651]: [prometheus INFO root] Engine started.
Dec 15 10:40:25 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec 15 10:40:25 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 15 10:40:25 compute-0 ceph-mon[74356]: osdmap e108: 3 total, 3 up, 3 in
Dec 15 10:40:25 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:25 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:25 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec 15 10:40:25 compute-0 ceph-mon[74356]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec 15 10:40:25 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec 15 10:40:25 compute-0 ceph-mon[74356]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec 15 10:40:25 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec 15 10:40:25 compute-0 ceph-mon[74356]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec 15 10:40:25 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:25 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec 15 10:40:25 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec 15 10:40:25 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 109 pg[10.14( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=73/73 les/c/f=74/74/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[73,109)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:40:25 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 109 pg[10.14( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=73/73 les/c/f=74/74/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[73,109)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 15 10:40:25 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 109 pg[10.13( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=5 ec=59/47 lis/c=107/66 les/c/f=108/67/0 sis=109) [0] r=0 lpr=109 pi=[66,109)/1 luod=0'0 crt=54'1067 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:40:25 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 109 pg[10.13( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=5 ec=59/47 lis/c=107/66 les/c/f=108/67/0 sis=109) [0] r=0 lpr=109 pi=[66,109)/1 crt=54'1067 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:40:25 compute-0 podman[102145]: 2025-12-15 10:40:25.93504293 +0000 UTC m=+0.062536458 container exec 79ed8dc51b1852fd831764c2817cfa3745ae4937bc9015bdb965e376ab3cc58d (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 15 10:40:26 compute-0 podman[102145]: 2025-12-15 10:40:26.031489668 +0000 UTC m=+0.158983166 container exec_died 79ed8dc51b1852fd831764c2817cfa3745ae4937bc9015bdb965e376ab3cc58d (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 15 10:40:26 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v36: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Dec 15 10:40:26 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:26 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003a40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:26 compute-0 podman[102266]: 2025-12-15 10:40:26.424933975 +0000 UTC m=+0.052101273 container exec 4840dadc82f34f3188660955046170e50347603e400e5beeba87756514ef7863 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:26 compute-0 podman[102266]: 2025-12-15 10:40:26.437505782 +0000 UTC m=+0.064673050 container exec_died 4840dadc82f34f3188660955046170e50347603e400e5beeba87756514ef7863 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:26 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:26 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:26 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:26.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:26 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:26 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004060 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:26 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:26 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:26 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:26.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:26 compute-0 podman[102360]: 2025-12-15 10:40:26.761708035 +0000 UTC m=+0.062353283 container exec c0f38bc539f687c54b7eb01f058f3691b3a24f961ff23e54889334c42825f1f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:40:26 compute-0 podman[102360]: 2025-12-15 10:40:26.775510118 +0000 UTC m=+0.076155366 container exec_died c0f38bc539f687c54b7eb01f058f3691b3a24f961ff23e54889334c42825f1f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 15 10:40:26 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:40:26 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Dec 15 10:40:26 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Dec 15 10:40:26 compute-0 ceph-mon[74356]: osdmap e109: 3 total, 3 up, 3 in
Dec 15 10:40:26 compute-0 ceph-mon[74356]: pgmap v36: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Dec 15 10:40:26 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Dec 15 10:40:26 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 110 pg[10.13( v 54'1067 (0'0,54'1067] local-lis/les=109/110 n=5 ec=59/47 lis/c=107/66 les/c/f=108/67/0 sis=109) [0] r=0 lpr=109 pi=[66,109)/1 crt=54'1067 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:40:26 compute-0 podman[102421]: 2025-12-15 10:40:26.984283518 +0000 UTC m=+0.060442227 container exec 55276bcc9605a59cf148bdf11fc4eb753ca401193f2349bda31c6714ea73d19e (image=quay.io/ceph/haproxy:2.3, name=ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-nfs-cephfs-compute-0-ykblqa)
Dec 15 10:40:27 compute-0 podman[102443]: 2025-12-15 10:40:27.133500488 +0000 UTC m=+0.132926884 container exec_died 55276bcc9605a59cf148bdf11fc4eb753ca401193f2349bda31c6714ea73d19e (image=quay.io/ceph/haproxy:2.3, name=ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-nfs-cephfs-compute-0-ykblqa)
Dec 15 10:40:27 compute-0 podman[102421]: 2025-12-15 10:40:27.173868468 +0000 UTC m=+0.250027217 container exec_died 55276bcc9605a59cf148bdf11fc4eb753ca401193f2349bda31c6714ea73d19e (image=quay.io/ceph/haproxy:2.3, name=ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-nfs-cephfs-compute-0-ykblqa)
Dec 15 10:40:27 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:27 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:27 compute-0 podman[102489]: 2025-12-15 10:40:27.372242364 +0000 UTC m=+0.048499668 container exec eb383ee2660aa4da80291a00f1592a5a221d1461ab8a14db8d6bf5db83163774 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd, vendor=Red Hat, Inc., release=1793, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, distribution-scope=public, version=2.2.4, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived)
Dec 15 10:40:27 compute-0 podman[102489]: 2025-12-15 10:40:27.381373332 +0000 UTC m=+0.057630626 container exec_died eb383ee2660aa4da80291a00f1592a5a221d1461ab8a14db8d6bf5db83163774 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, release=1793, com.redhat.component=keepalived-container, architecture=x86_64, description=keepalived for Ceph, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, name=keepalived)
Dec 15 10:40:27 compute-0 podman[102553]: 2025-12-15 10:40:27.997313139 +0000 UTC m=+0.465255706 container exec 89a20608330788e4c3363bfd799e4feb78e7548f505117a7fc750d8eecfef10f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:28 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Dec 15 10:40:28 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v38: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 726 B/s rd, 0 op/s; 26 B/s, 0 objects/s recovering
Dec 15 10:40:28 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 15 10:40:28 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:40:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:40:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:40:28 compute-0 ceph-mon[74356]: osdmap e110: 3 total, 3 up, 3 in
Dec 15 10:40:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:40:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:40:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:40:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:40:28 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:28 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003ea0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:28 compute-0 podman[102553]: 2025-12-15 10:40:28.366826786 +0000 UTC m=+0.834769363 container exec_died 89a20608330788e4c3363bfd799e4feb78e7548f505117a7fc750d8eecfef10f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:28 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:28 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:28 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:28.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:28 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:28 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003a40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:28 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Dec 15 10:40:28 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:28 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:28 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:28.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:28 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Dec 15 10:40:28 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 111 pg[10.14( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=5 ec=59/47 lis/c=109/73 les/c/f=110/74/0 sis=111) [0] r=0 lpr=111 pi=[73,111)/1 luod=0'0 crt=54'1067 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:40:28 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 111 pg[10.14( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=5 ec=59/47 lis/c=109/73 les/c/f=110/74/0 sis=111) [0] r=0 lpr=111 pi=[73,111)/1 crt=54'1067 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:40:28 compute-0 podman[102630]: 2025-12-15 10:40:28.683862518 +0000 UTC m=+0.052967198 container exec 904de0ff1c362d9a2eedc2ea5dd43957c068393257172a115a9781fe3109347c (image=quay.io/ceph/grafana:10.4.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:40:28 compute-0 podman[102630]: 2025-12-15 10:40:28.845505392 +0000 UTC m=+0.214610332 container exec_died 904de0ff1c362d9a2eedc2ea5dd43957c068393257172a115a9781fe3109347c (image=quay.io/ceph/grafana:10.4.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:40:29 compute-0 podman[102742]: 2025-12-15 10:40:29.221301162 +0000 UTC m=+0.069919944 container exec 811bf452ef3ce2cd1829f815f500a947c1c9dba459a203c1e31a5cfea42f0cfb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:29 compute-0 ceph-mon[74356]: pgmap v38: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 726 B/s rd, 0 op/s; 26 B/s, 0 objects/s recovering
Dec 15 10:40:29 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:40:29 compute-0 ceph-mon[74356]: osdmap e111: 3 total, 3 up, 3 in
Dec 15 10:40:29 compute-0 podman[102742]: 2025-12-15 10:40:29.262135596 +0000 UTC m=+0.110754348 container exec_died 811bf452ef3ce2cd1829f815f500a947c1c9dba459a203c1e31a5cfea42f0cfb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:40:29 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:29 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004060 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:29 compute-0 sudo[102050]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:40:29 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:40:29 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:40:29 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 15 10:40:29 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:40:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 15 10:40:29 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 15 10:40:29 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 15 10:40:29 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 15 10:40:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 15 10:40:29 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 15 10:40:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:40:29 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Dec 15 10:40:29 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Dec 15 10:40:29 compute-0 sudo[102787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:40:29 compute-0 sudo[102787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:29 compute-0 sudo[102787]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:29 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Dec 15 10:40:29 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 112 pg[10.14( v 54'1067 (0'0,54'1067] local-lis/les=111/112 n=5 ec=59/47 lis/c=109/73 les/c/f=110/74/0 sis=111) [0] r=0 lpr=111 pi=[73,111)/1 crt=54'1067 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:40:29 compute-0 sudo[102812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 15 10:40:29 compute-0 sudo[102812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:29 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0[101604]: ts=2025-12-15T10:40:29.689Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.00270469s
Dec 15 10:40:29 compute-0 podman[102875]: 2025-12-15 10:40:29.968135064 +0000 UTC m=+0.055856063 container create 1200f69e1f1cf320192cccdf7ed6fc4571c7714d47a2befd8285352924f1f0c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_grothendieck, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 15 10:40:30 compute-0 systemd[1]: Started libpod-conmon-1200f69e1f1cf320192cccdf7ed6fc4571c7714d47a2befd8285352924f1f0c4.scope.
Dec 15 10:40:30 compute-0 podman[102875]: 2025-12-15 10:40:29.93482268 +0000 UTC m=+0.022543769 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:40:30 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:30 compute-0 podman[102875]: 2025-12-15 10:40:30.056734833 +0000 UTC m=+0.144455862 container init 1200f69e1f1cf320192cccdf7ed6fc4571c7714d47a2befd8285352924f1f0c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 15 10:40:30 compute-0 podman[102875]: 2025-12-15 10:40:30.065652754 +0000 UTC m=+0.153373753 container start 1200f69e1f1cf320192cccdf7ed6fc4571c7714d47a2befd8285352924f1f0c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_grothendieck, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:40:30 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v41: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 2 objects/s recovering
Dec 15 10:40:30 compute-0 brave_grothendieck[102892]: 167 167
Dec 15 10:40:30 compute-0 systemd[1]: libpod-1200f69e1f1cf320192cccdf7ed6fc4571c7714d47a2befd8285352924f1f0c4.scope: Deactivated successfully.
Dec 15 10:40:30 compute-0 podman[102875]: 2025-12-15 10:40:30.071712031 +0000 UTC m=+0.159433060 container attach 1200f69e1f1cf320192cccdf7ed6fc4571c7714d47a2befd8285352924f1f0c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_grothendieck, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 15 10:40:30 compute-0 podman[102875]: 2025-12-15 10:40:30.07205919 +0000 UTC m=+0.159780179 container died 1200f69e1f1cf320192cccdf7ed6fc4571c7714d47a2befd8285352924f1f0c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:40:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-57dafab191620a615926fde3c81987a89ba2df20d2a752dbaf58001b77f08ca6-merged.mount: Deactivated successfully.
Dec 15 10:40:30 compute-0 podman[102875]: 2025-12-15 10:40:30.130248451 +0000 UTC m=+0.217969450 container remove 1200f69e1f1cf320192cccdf7ed6fc4571c7714d47a2befd8285352924f1f0c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 15 10:40:30 compute-0 systemd[1]: libpod-conmon-1200f69e1f1cf320192cccdf7ed6fc4571c7714d47a2befd8285352924f1f0c4.scope: Deactivated successfully.
Dec 15 10:40:30 compute-0 podman[102915]: 2025-12-15 10:40:30.301924897 +0000 UTC m=+0.043929485 container create d6893ef3d3b936da98c54391a9cf03b68b4d8bd524c4c8bef6588de60c358276 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 15 10:40:30 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:30 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:30 compute-0 systemd[1]: Started libpod-conmon-d6893ef3d3b936da98c54391a9cf03b68b4d8bd524c4c8bef6588de60c358276.scope.
Dec 15 10:40:30 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360a19f76ef9a65bcd48087d7beeb8cbb351c3728c29070325d96c298ebdf5bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360a19f76ef9a65bcd48087d7beeb8cbb351c3728c29070325d96c298ebdf5bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360a19f76ef9a65bcd48087d7beeb8cbb351c3728c29070325d96c298ebdf5bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360a19f76ef9a65bcd48087d7beeb8cbb351c3728c29070325d96c298ebdf5bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360a19f76ef9a65bcd48087d7beeb8cbb351c3728c29070325d96c298ebdf5bb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:30 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:30 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:30 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:30 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:40:30 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:30 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:30 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 15 10:40:30 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 15 10:40:30 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:40:30 compute-0 ceph-mon[74356]: osdmap e112: 3 total, 3 up, 3 in
Dec 15 10:40:30 compute-0 podman[102915]: 2025-12-15 10:40:30.283255751 +0000 UTC m=+0.025260369 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:40:30 compute-0 podman[102915]: 2025-12-15 10:40:30.385065256 +0000 UTC m=+0.127069864 container init d6893ef3d3b936da98c54391a9cf03b68b4d8bd524c4c8bef6588de60c358276 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_banach, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 15 10:40:30 compute-0 podman[102915]: 2025-12-15 10:40:30.392221295 +0000 UTC m=+0.134225873 container start d6893ef3d3b936da98c54391a9cf03b68b4d8bd524c4c8bef6588de60c358276 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_banach, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 15 10:40:30 compute-0 podman[102915]: 2025-12-15 10:40:30.397162749 +0000 UTC m=+0.139167347 container attach d6893ef3d3b936da98c54391a9cf03b68b4d8bd524c4c8bef6588de60c358276 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_banach, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 15 10:40:30 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:30 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 15 10:40:30 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:30.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 15 10:40:30 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:30 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003ec0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:30 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:30 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:30 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:30.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:30 compute-0 recursing_banach[102932]: --> passed data devices: 0 physical, 1 LVM
Dec 15 10:40:30 compute-0 recursing_banach[102932]: --> All data devices are unavailable
Dec 15 10:40:30 compute-0 systemd[1]: libpod-d6893ef3d3b936da98c54391a9cf03b68b4d8bd524c4c8bef6588de60c358276.scope: Deactivated successfully.
Dec 15 10:40:30 compute-0 podman[102948]: 2025-12-15 10:40:30.80309257 +0000 UTC m=+0.025023042 container died d6893ef3d3b936da98c54391a9cf03b68b4d8bd524c4c8bef6588de60c358276 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_banach, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:40:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-360a19f76ef9a65bcd48087d7beeb8cbb351c3728c29070325d96c298ebdf5bb-merged.mount: Deactivated successfully.
Dec 15 10:40:30 compute-0 podman[102948]: 2025-12-15 10:40:30.926850966 +0000 UTC m=+0.148781448 container remove d6893ef3d3b936da98c54391a9cf03b68b4d8bd524c4c8bef6588de60c358276 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_banach, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 15 10:40:30 compute-0 systemd[1]: libpod-conmon-d6893ef3d3b936da98c54391a9cf03b68b4d8bd524c4c8bef6588de60c358276.scope: Deactivated successfully.
Dec 15 10:40:30 compute-0 sudo[102812]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:31 compute-0 sudo[102964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:40:31 compute-0 sudo[102964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:31 compute-0 sudo[102964]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:31 compute-0 sudo[102989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 -- lvm list --format json
Dec 15 10:40:31 compute-0 sudo[102989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:31 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:31 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc002830 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:31 compute-0 podman[103056]: 2025-12-15 10:40:31.545890443 +0000 UTC m=+0.026102599 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:40:31 compute-0 ceph-mon[74356]: pgmap v41: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 2 objects/s recovering
Dec 15 10:40:31 compute-0 podman[103056]: 2025-12-15 10:40:31.692024567 +0000 UTC m=+0.172236693 container create 7cc87c7a1b8802e2907391b1e207a34042b106ac58b3ad34fca7c8b5eca9f20f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_engelbart, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Dec 15 10:40:31 compute-0 systemd[1]: Started libpod-conmon-7cc87c7a1b8802e2907391b1e207a34042b106ac58b3ad34fca7c8b5eca9f20f.scope.
Dec 15 10:40:31 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:31 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:40:31 compute-0 podman[103056]: 2025-12-15 10:40:31.870219149 +0000 UTC m=+0.350431295 container init 7cc87c7a1b8802e2907391b1e207a34042b106ac58b3ad34fca7c8b5eca9f20f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_engelbart, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 15 10:40:31 compute-0 podman[103056]: 2025-12-15 10:40:31.880545391 +0000 UTC m=+0.360757517 container start 7cc87c7a1b8802e2907391b1e207a34042b106ac58b3ad34fca7c8b5eca9f20f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 15 10:40:31 compute-0 hardcore_engelbart[103072]: 167 167
Dec 15 10:40:31 compute-0 systemd[1]: libpod-7cc87c7a1b8802e2907391b1e207a34042b106ac58b3ad34fca7c8b5eca9f20f.scope: Deactivated successfully.
Dec 15 10:40:31 compute-0 podman[103056]: 2025-12-15 10:40:31.955941455 +0000 UTC m=+0.436153581 container attach 7cc87c7a1b8802e2907391b1e207a34042b106ac58b3ad34fca7c8b5eca9f20f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_engelbart, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 15 10:40:31 compute-0 podman[103056]: 2025-12-15 10:40:31.9563488 +0000 UTC m=+0.436560926 container died 7cc87c7a1b8802e2907391b1e207a34042b106ac58b3ad34fca7c8b5eca9f20f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:40:32 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v42: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec 15 10:40:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-67ed28356d000bf44328bd4173f9679e31e7cc47a37617a59afbdeeb95a84f3d-merged.mount: Deactivated successfully.
Dec 15 10:40:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:32 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004060 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:32 compute-0 podman[103056]: 2025-12-15 10:40:32.419548131 +0000 UTC m=+0.899760287 container remove 7cc87c7a1b8802e2907391b1e207a34042b106ac58b3ad34fca7c8b5eca9f20f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:40:32 compute-0 systemd[1]: libpod-conmon-7cc87c7a1b8802e2907391b1e207a34042b106ac58b3ad34fca7c8b5eca9f20f.scope: Deactivated successfully.
Dec 15 10:40:32 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:32 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:32 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:32.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:32 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:32 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:32 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:32 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:32.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:32 compute-0 podman[103100]: 2025-12-15 10:40:32.573255286 +0000 UTC m=+0.022254535 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:40:32 compute-0 podman[103100]: 2025-12-15 10:40:32.684912263 +0000 UTC m=+0.133911492 container create 5deabb0d6b5d5b1d2746051316c620c6ad22d7e039a374cd1a70421ea198c895 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:40:32 compute-0 systemd[1]: Started libpod-conmon-5deabb0d6b5d5b1d2746051316c620c6ad22d7e039a374cd1a70421ea198c895.scope.
Dec 15 10:40:32 compute-0 ceph-mon[74356]: pgmap v42: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec 15 10:40:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:40:32] "GET /metrics HTTP/1.1" 200 48223 "" "Prometheus/2.51.0"
Dec 15 10:40:32 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:40:32] "GET /metrics HTTP/1.1" 200 48223 "" "Prometheus/2.51.0"
Dec 15 10:40:32 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd0ceb3d6cb236cf04e3036a314f07de7de9d5d6deaf9adbf48a0ad765359a4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd0ceb3d6cb236cf04e3036a314f07de7de9d5d6deaf9adbf48a0ad765359a4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd0ceb3d6cb236cf04e3036a314f07de7de9d5d6deaf9adbf48a0ad765359a4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd0ceb3d6cb236cf04e3036a314f07de7de9d5d6deaf9adbf48a0ad765359a4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:33 compute-0 podman[103100]: 2025-12-15 10:40:33.060316182 +0000 UTC m=+0.509315431 container init 5deabb0d6b5d5b1d2746051316c620c6ad22d7e039a374cd1a70421ea198c895 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_grothendieck, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 15 10:40:33 compute-0 podman[103100]: 2025-12-15 10:40:33.07322171 +0000 UTC m=+0.522220939 container start 5deabb0d6b5d5b1d2746051316c620c6ad22d7e039a374cd1a70421ea198c895 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_grothendieck, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:40:33 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:33 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e4000df0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:33 compute-0 podman[103100]: 2025-12-15 10:40:33.348327943 +0000 UTC m=+0.797327232 container attach 5deabb0d6b5d5b1d2746051316c620c6ad22d7e039a374cd1a70421ea198c895 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]: {
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:     "0": [
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:         {
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:             "devices": [
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:                 "/dev/loop3"
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:             ],
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:             "lv_name": "ceph_lv0",
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:             "lv_size": "21470642176",
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MCSOGR-6V69-UVGq-8FpK-DBjb-LyDE-FLUPxJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=77365f67-614e-5a8d-b658-640395550c79,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7690eca0-4e87-4157-a045-1912448da925,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:             "lv_uuid": "MCSOGR-6V69-UVGq-8FpK-DBjb-LyDE-FLUPxJ",
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:             "name": "ceph_lv0",
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:             "tags": {
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:                 "ceph.block_uuid": "MCSOGR-6V69-UVGq-8FpK-DBjb-LyDE-FLUPxJ",
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:                 "ceph.cephx_lockbox_secret": "",
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:                 "ceph.cluster_fsid": "77365f67-614e-5a8d-b658-640395550c79",
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:                 "ceph.cluster_name": "ceph",
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:                 "ceph.crush_device_class": "",
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:                 "ceph.encrypted": "0",
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:                 "ceph.osd_fsid": "7690eca0-4e87-4157-a045-1912448da925",
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:                 "ceph.osd_id": "0",
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:                 "ceph.type": "block",
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:                 "ceph.vdo": "0",
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:                 "ceph.with_tpm": "0"
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:             },
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:             "type": "block",
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:             "vg_name": "ceph_vg0"
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:         }
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]:     ]
Dec 15 10:40:33 compute-0 nice_grothendieck[103117]: }
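[editor's note] The container output captured in the preceding lines looks like ceph-volume "lvm list --format json" output: a JSON object keyed by OSD id ("0"), each value a list of logical-volume records carrying the ceph.* LVM tags (block device, cluster fsid, osd_fsid, encryption flag, and so on). A minimal sketch of how such a capture could be summarized follows; the capture file name is hypothetical and the structure is assumed to match exactly what the log shows.

    #!/usr/bin/env python3
    """Summarize ceph-volume "lvm list --format json"-style output.

    Assumes the JSON emitted by the container above was saved to the
    (hypothetical) file below; structure: {osd_id: [lv_record, ...]}.
    """
    import json

    with open("ceph_volume_lvm_list.json") as fh:  # hypothetical capture file
        osds = json.load(fh)

    for osd_id, lvs in osds.items():
        for lv in lvs:
            tags = lv.get("tags", {})
            print(
                f"osd.{osd_id}: lv_path={lv.get('lv_path')} "
                f"devices={','.join(lv.get('devices', []))} "
                f"osd_fsid={tags.get('ceph.osd_fsid')} "
                f"encrypted={tags.get('ceph.encrypted')}"
            )

For the record above this would print a single line for osd.0 backed by /dev/loop3 via /dev/ceph_vg0/ceph_lv0, unencrypted.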
Dec 15 10:40:33 compute-0 systemd[1]: libpod-5deabb0d6b5d5b1d2746051316c620c6ad22d7e039a374cd1a70421ea198c895.scope: Deactivated successfully.
Dec 15 10:40:33 compute-0 podman[103100]: 2025-12-15 10:40:33.421366949 +0000 UTC m=+0.870366178 container died 5deabb0d6b5d5b1d2746051316c620c6ad22d7e039a374cd1a70421ea198c895 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 15 10:40:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd0ceb3d6cb236cf04e3036a314f07de7de9d5d6deaf9adbf48a0ad765359a4f-merged.mount: Deactivated successfully.
Dec 15 10:40:33 compute-0 podman[103100]: 2025-12-15 10:40:33.551388857 +0000 UTC m=+1.000388076 container remove 5deabb0d6b5d5b1d2746051316c620c6ad22d7e039a374cd1a70421ea198c895 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_grothendieck, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:40:33 compute-0 systemd[1]: libpod-conmon-5deabb0d6b5d5b1d2746051316c620c6ad22d7e039a374cd1a70421ea198c895.scope: Deactivated successfully.
Dec 15 10:40:33 compute-0 sudo[102989]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:33 compute-0 sudo[103141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:40:33 compute-0 sudo[103141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:33 compute-0 sudo[103141]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:33 compute-0 sudo[103166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 -- raw list --format json
Dec 15 10:40:33 compute-0 sudo[103166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:33 compute-0 sudo[103172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 15 10:40:33 compute-0 sudo[103172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:33 compute-0 sudo[103172]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:34 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v43: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 1 objects/s recovering
Dec 15 10:40:34 compute-0 podman[103255]: 2025-12-15 10:40:34.106862916 +0000 UTC m=+0.022350209 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:40:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:34 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc002830 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:34 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:34 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:34 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc002830 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:34 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:34.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:34 compute-0 podman[103255]: 2025-12-15 10:40:34.479399429 +0000 UTC m=+0.394886692 container create 7b347eb5d6c2a80e9c381ed3bac65761581b8a7fbca5ca26fcc609ef5218de31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_napier, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:40:34 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:34 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:34 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:34.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:34 compute-0 systemd[1]: Started libpod-conmon-7b347eb5d6c2a80e9c381ed3bac65761581b8a7fbca5ca26fcc609ef5218de31.scope.
Dec 15 10:40:34 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:34 compute-0 sudo[103298]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcuoerpoevghbagjomrdqizxcwjgmass ; /usr/bin/python3'
Dec 15 10:40:34 compute-0 sudo[103298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:40:34 compute-0 podman[103255]: 2025-12-15 10:40:34.57523937 +0000 UTC m=+0.490726663 container init 7b347eb5d6c2a80e9c381ed3bac65761581b8a7fbca5ca26fcc609ef5218de31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_napier, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 15 10:40:34 compute-0 podman[103255]: 2025-12-15 10:40:34.58172256 +0000 UTC m=+0.497209833 container start 7b347eb5d6c2a80e9c381ed3bac65761581b8a7fbca5ca26fcc609ef5218de31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_napier, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 15 10:40:34 compute-0 podman[103255]: 2025-12-15 10:40:34.585093705 +0000 UTC m=+0.500580988 container attach 7b347eb5d6c2a80e9c381ed3bac65761581b8a7fbca5ca26fcc609ef5218de31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_napier, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 15 10:40:34 compute-0 charming_napier[103299]: 167 167
Dec 15 10:40:34 compute-0 systemd[1]: libpod-7b347eb5d6c2a80e9c381ed3bac65761581b8a7fbca5ca26fcc609ef5218de31.scope: Deactivated successfully.
Dec 15 10:40:34 compute-0 podman[103255]: 2025-12-15 10:40:34.587550236 +0000 UTC m=+0.503037509 container died 7b347eb5d6c2a80e9c381ed3bac65761581b8a7fbca5ca26fcc609ef5218de31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_napier, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:40:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-807ee7cc129615543e18cb4dccd678060ece17bcf1e5aba3907ef6a522cb13a6-merged.mount: Deactivated successfully.
Dec 15 10:40:34 compute-0 podman[103255]: 2025-12-15 10:40:34.640530099 +0000 UTC m=+0.556017392 container remove 7b347eb5d6c2a80e9c381ed3bac65761581b8a7fbca5ca26fcc609ef5218de31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_napier, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:40:34 compute-0 systemd[1]: libpod-conmon-7b347eb5d6c2a80e9c381ed3bac65761581b8a7fbca5ca26fcc609ef5218de31.scope: Deactivated successfully.
Dec 15 10:40:34 compute-0 python3[103303]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:40:34 compute-0 podman[103322]: 2025-12-15 10:40:34.785935696 +0000 UTC m=+0.050474921 container create ac2702d10d847029a936cdd21866df66d0b3ebbe574ecd3befff9ce8de289c88 (image=quay.io/ceph/ceph:v19, name=reverent_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 15 10:40:34 compute-0 systemd[1]: Started libpod-conmon-ac2702d10d847029a936cdd21866df66d0b3ebbe574ecd3befff9ce8de289c88.scope.
Dec 15 10:40:34 compute-0 podman[103338]: 2025-12-15 10:40:34.817099831 +0000 UTC m=+0.046918980 container create 2eb6f965e8458b6e46e9623e03a90ab5fafc138fa57423eed69c516da2fa6dff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_elion, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True)
Dec 15 10:40:34 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae99b14dbff25533ef0b193beced453ab5c9f524b87d3ad4ae882ae713dca5d6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae99b14dbff25533ef0b193beced453ab5c9f524b87d3ad4ae882ae713dca5d6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:34 compute-0 systemd[1]: Started libpod-conmon-2eb6f965e8458b6e46e9623e03a90ab5fafc138fa57423eed69c516da2fa6dff.scope.
Dec 15 10:40:34 compute-0 podman[103322]: 2025-12-15 10:40:34.765612543 +0000 UTC m=+0.030151798 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:40:34 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:34 compute-0 podman[103322]: 2025-12-15 10:40:34.865486733 +0000 UTC m=+0.130025968 container init ac2702d10d847029a936cdd21866df66d0b3ebbe574ecd3befff9ce8de289c88 (image=quay.io/ceph/ceph:v19, name=reverent_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 15 10:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1db1fa4b90a7ee9f9871c69e453d113b9272e2db0a385a796f22a23a55a888f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1db1fa4b90a7ee9f9871c69e453d113b9272e2db0a385a796f22a23a55a888f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1db1fa4b90a7ee9f9871c69e453d113b9272e2db0a385a796f22a23a55a888f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1db1fa4b90a7ee9f9871c69e453d113b9272e2db0a385a796f22a23a55a888f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:34 compute-0 podman[103338]: 2025-12-15 10:40:34.878369081 +0000 UTC m=+0.108188250 container init 2eb6f965e8458b6e46e9623e03a90ab5fafc138fa57423eed69c516da2fa6dff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_elion, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 15 10:40:34 compute-0 podman[103322]: 2025-12-15 10:40:34.879884197 +0000 UTC m=+0.144423422 container start ac2702d10d847029a936cdd21866df66d0b3ebbe574ecd3befff9ce8de289c88 (image=quay.io/ceph/ceph:v19, name=reverent_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:40:34 compute-0 podman[103322]: 2025-12-15 10:40:34.884502308 +0000 UTC m=+0.149041573 container attach ac2702d10d847029a936cdd21866df66d0b3ebbe574ecd3befff9ce8de289c88 (image=quay.io/ceph/ceph:v19, name=reverent_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 15 10:40:34 compute-0 podman[103338]: 2025-12-15 10:40:34.887366224 +0000 UTC m=+0.117185373 container start 2eb6f965e8458b6e46e9623e03a90ab5fafc138fa57423eed69c516da2fa6dff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:40:34 compute-0 podman[103338]: 2025-12-15 10:40:34.890416117 +0000 UTC m=+0.120235286 container attach 2eb6f965e8458b6e46e9623e03a90ab5fafc138fa57423eed69c516da2fa6dff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 15 10:40:34 compute-0 podman[103338]: 2025-12-15 10:40:34.797139872 +0000 UTC m=+0.026959041 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:40:34 compute-0 reverent_leavitt[103356]: ERROR: invalid flag --daemon-type
Dec 15 10:40:34 compute-0 systemd[1]: libpod-ac2702d10d847029a936cdd21866df66d0b3ebbe574ecd3befff9ce8de289c88.scope: Deactivated successfully.
Dec 15 10:40:34 compute-0 podman[103322]: 2025-12-15 10:40:34.940923578 +0000 UTC m=+0.205462793 container died ac2702d10d847029a936cdd21866df66d0b3ebbe574ecd3befff9ce8de289c88 (image=quay.io/ceph/ceph:v19, name=reverent_leavitt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:40:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae99b14dbff25533ef0b193beced453ab5c9f524b87d3ad4ae882ae713dca5d6-merged.mount: Deactivated successfully.
Dec 15 10:40:34 compute-0 podman[103322]: 2025-12-15 10:40:34.973995983 +0000 UTC m=+0.238535208 container remove ac2702d10d847029a936cdd21866df66d0b3ebbe574ecd3befff9ce8de289c88 (image=quay.io/ceph/ceph:v19, name=reverent_leavitt, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 15 10:40:34 compute-0 systemd[1]: libpod-conmon-ac2702d10d847029a936cdd21866df66d0b3ebbe574ecd3befff9ce8de289c88.scope: Deactivated successfully.
Dec 15 10:40:34 compute-0 sudo[103298]: pam_unix(sudo:session): session closed for user root
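[editor's note] The Ansible task logged above runs "podman run ... --entrypoint radosgw-admin ... orch ps --daemon-type rgw --format json", and the container (reverent_leavitt) exits with "ERROR: invalid flag --daemon-type". That is expected: "orch ps" is a subcommand of the ceph orchestrator CLI, not of radosgw-admin, so radosgw-admin rejects the unknown flag. A hedged sketch of what the check was presumably meant to do follows; the "ceph" entrypoint, the "--daemon_type" spelling, and the reduced volume list are assumptions, not taken from the log, and the container structure merely mirrors the logged invocation.

    #!/usr/bin/env python3
    """Sketch: list rgw daemons via the ceph orchestrator CLI inside a
    ceph container (assumed correction of the failed call above)."""
    import json
    import subprocess

    IMAGE = "quay.io/ceph/ceph:v19"

    cmd = [
        "podman", "run", "--rm", "--net=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--entrypoint", "ceph", IMAGE,  # assumed: ceph CLI, not radosgw-admin
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "orch", "ps", "--daemon_type", "rgw", "--format", "json",
    ]

    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    daemons = json.loads(result.stdout or "[]")
    print(f"{len(daemons)} rgw daemon(s) reported by the orchestrator")

The same invocation is retried later in the log (naughty_colden) and fails identically, which is consistent with the flag, not the environment, being the problem.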
Dec 15 10:40:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:35 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc002830 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:35 compute-0 lvm[103466]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 15 10:40:35 compute-0 lvm[103466]: VG ceph_vg0 finished
Dec 15 10:40:35 compute-0 jolly_elion[103362]: {}
Dec 15 10:40:35 compute-0 systemd[1]: libpod-2eb6f965e8458b6e46e9623e03a90ab5fafc138fa57423eed69c516da2fa6dff.scope: Deactivated successfully.
Dec 15 10:40:35 compute-0 podman[103338]: 2025-12-15 10:40:35.590615899 +0000 UTC m=+0.820435068 container died 2eb6f965e8458b6e46e9623e03a90ab5fafc138fa57423eed69c516da2fa6dff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_elion, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 15 10:40:35 compute-0 systemd[1]: libpod-2eb6f965e8458b6e46e9623e03a90ab5fafc138fa57423eed69c516da2fa6dff.scope: Consumed 1.119s CPU time.
Dec 15 10:40:35 compute-0 ceph-mon[74356]: pgmap v43: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 1 objects/s recovering
Dec 15 10:40:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1db1fa4b90a7ee9f9871c69e453d113b9272e2db0a385a796f22a23a55a888f-merged.mount: Deactivated successfully.
Dec 15 10:40:35 compute-0 podman[103338]: 2025-12-15 10:40:35.747372137 +0000 UTC m=+0.977191286 container remove 2eb6f965e8458b6e46e9623e03a90ab5fafc138fa57423eed69c516da2fa6dff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_elion, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:40:35 compute-0 systemd[1]: libpod-conmon-2eb6f965e8458b6e46e9623e03a90ab5fafc138fa57423eed69c516da2fa6dff.scope: Deactivated successfully.
Dec 15 10:40:35 compute-0 sudo[103166]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:35 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:40:35 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:35 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:40:35 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:35 compute-0 sudo[103482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 15 10:40:35 compute-0 sudo[103482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:35 compute-0 sudo[103482]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:36 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v44: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s; 13 B/s, 1 objects/s recovering
Dec 15 10:40:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Dec 15 10:40:36 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 15 10:40:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:36 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004060 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:36 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:36 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:36 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:36.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:36 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003f50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:36 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:36 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:36 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:36.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:36 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:36 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:40:36 compute-0 ceph-mon[74356]: pgmap v44: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s; 13 B/s, 1 objects/s recovering
Dec 15 10:40:36 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 15 10:40:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Dec 15 10:40:36 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 15 10:40:36 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Dec 15 10:40:36 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Dec 15 10:40:37 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:37 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:37 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 15 10:40:37 compute-0 ceph-mon[74356]: osdmap e113: 3 total, 3 up, 3 in
Dec 15 10:40:38 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v46: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 358 B/s rd, 0 op/s
Dec 15 10:40:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Dec 15 10:40:38 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec 15 10:40:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:38 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc002830 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:38 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004060 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:38 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:38 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000037s ======
Dec 15 10:40:38 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:38.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Dec 15 10:40:38 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:38 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000038s ======
Dec 15 10:40:38 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:38.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Dec 15 10:40:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Dec 15 10:40:38 compute-0 ceph-mon[74356]: pgmap v46: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 358 B/s rd, 0 op/s
Dec 15 10:40:38 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec 15 10:40:38 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 15 10:40:38 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Dec 15 10:40:38 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Dec 15 10:40:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:39 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:40 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 15 10:40:40 compute-0 ceph-mon[74356]: osdmap e114: 3 total, 3 up, 3 in
Dec 15 10:40:40 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v48: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Dec 15 10:40:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Dec 15 10:40:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec 15 10:40:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:40 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:40 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc002830 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:40 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:40 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:40 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:40.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:40 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:40 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:40 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:40.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:41 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Dec 15 10:40:41 compute-0 ceph-mon[74356]: pgmap v48: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Dec 15 10:40:41 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec 15 10:40:41 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 15 10:40:41 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Dec 15 10:40:41 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Dec 15 10:40:41 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:41 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:41 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:40:42 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v50: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Dec 15 10:40:42 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec 15 10:40:42 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 15 10:40:42 compute-0 ceph-mon[74356]: osdmap e115: 3 total, 3 up, 3 in
Dec 15 10:40:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Dec 15 10:40:42 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 15 10:40:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Dec 15 10:40:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:42 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003f90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:42 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Dec 15 10:40:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:42 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:42 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:42 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:42 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:42.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:42 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:42 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:42 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:42.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:40:42] "GET /metrics HTTP/1.1" 200 48223 "" "Prometheus/2.51.0"
Dec 15 10:40:42 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:40:42] "GET /metrics HTTP/1.1" 200 48223 "" "Prometheus/2.51.0"
Dec 15 10:40:43 compute-0 ceph-mon[74356]: pgmap v50: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:43 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec 15 10:40:43 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 15 10:40:43 compute-0 ceph-mon[74356]: osdmap e116: 3 total, 3 up, 3 in
Dec 15 10:40:43 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 15 10:40:43 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:40:43 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:43 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc002830 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:44 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v52: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Dec 15 10:40:44 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec 15 10:40:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Dec 15 10:40:44 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 15 10:40:44 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Dec 15 10:40:44 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Dec 15 10:40:44 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:40:44 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec 15 10:40:44 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:44 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:44 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:44 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003fb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:44 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:44 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:44 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:44.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:44 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:44 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:44 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:44.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:45 compute-0 sudo[103539]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtgozecofpvdhhqubixvodqaneyuhvmg ; /usr/bin/python3'
Dec 15 10:40:45 compute-0 sudo[103539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:40:45 compute-0 python3[103541]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:40:45 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:45 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:45 compute-0 podman[103543]: 2025-12-15 10:40:45.290422708 +0000 UTC m=+0.025846559 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:40:45 compute-0 ceph-mon[74356]: pgmap v52: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:45 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 15 10:40:45 compute-0 ceph-mon[74356]: osdmap e117: 3 total, 3 up, 3 in
Dec 15 10:40:45 compute-0 podman[103543]: 2025-12-15 10:40:45.41384982 +0000 UTC m=+0.149273641 container create 2e7dc8b37d42b6d33cc8d7b7304efe6642c0fb75c5a826b1620c61f876841919 (image=quay.io/ceph/ceph:v19, name=naughty_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 15 10:40:45 compute-0 systemd[1]: Started libpod-conmon-2e7dc8b37d42b6d33cc8d7b7304efe6642c0fb75c5a826b1620c61f876841919.scope.
Dec 15 10:40:45 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f97a10bd9779aeae4cb5ea60874ef0908aea9186f7c88ab699078b8d4e4f0cd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f97a10bd9779aeae4cb5ea60874ef0908aea9186f7c88ab699078b8d4e4f0cd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:45 compute-0 podman[103543]: 2025-12-15 10:40:45.751467469 +0000 UTC m=+0.486891300 container init 2e7dc8b37d42b6d33cc8d7b7304efe6642c0fb75c5a826b1620c61f876841919 (image=quay.io/ceph/ceph:v19, name=naughty_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec 15 10:40:45 compute-0 podman[103543]: 2025-12-15 10:40:45.75878352 +0000 UTC m=+0.494207341 container start 2e7dc8b37d42b6d33cc8d7b7304efe6642c0fb75c5a826b1620c61f876841919 (image=quay.io/ceph/ceph:v19, name=naughty_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:40:45 compute-0 naughty_colden[103557]: ERROR: invalid flag --daemon-type
Dec 15 10:40:45 compute-0 systemd[1]: libpod-2e7dc8b37d42b6d33cc8d7b7304efe6642c0fb75c5a826b1620c61f876841919.scope: Deactivated successfully.
Dec 15 10:40:45 compute-0 podman[103543]: 2025-12-15 10:40:45.866210791 +0000 UTC m=+0.601634642 container attach 2e7dc8b37d42b6d33cc8d7b7304efe6642c0fb75c5a826b1620c61f876841919 (image=quay.io/ceph/ceph:v19, name=naughty_colden, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 15 10:40:45 compute-0 podman[103543]: 2025-12-15 10:40:45.868390241 +0000 UTC m=+0.603814082 container died 2e7dc8b37d42b6d33cc8d7b7304efe6642c0fb75c5a826b1620c61f876841919 (image=quay.io/ceph/ceph:v19, name=naughty_colden, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 15 10:40:46 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 117 pg[10.19( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=84/84 les/c/f=85/85/0 sis=117) [0] r=0 lpr=117 pi=[84,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:40:46 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v54: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 15 10:40:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Dec 15 10:40:46 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec 15 10:40:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f97a10bd9779aeae4cb5ea60874ef0908aea9186f7c88ab699078b8d4e4f0cd-merged.mount: Deactivated successfully.
Dec 15 10:40:46 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:46 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc0043c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:46 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:46 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:46 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:46 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000037s ======
Dec 15 10:40:46 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:46.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Dec 15 10:40:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Dec 15 10:40:46 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec 15 10:40:46 compute-0 podman[103543]: 2025-12-15 10:40:46.517036863 +0000 UTC m=+1.252460684 container remove 2e7dc8b37d42b6d33cc8d7b7304efe6642c0fb75c5a826b1620c61f876841919 (image=quay.io/ceph/ceph:v19, name=naughty_colden, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 15 10:40:46 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:46 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000037s ======
Dec 15 10:40:46 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:46.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Dec 15 10:40:46 compute-0 sudo[103539]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:46 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 15 10:40:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Dec 15 10:40:46 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Dec 15 10:40:46 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 118 pg[10.19( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=84/84 les/c/f=85/85/0 sis=118) [0]/[1] r=-1 lpr=118 pi=[84,118)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:40:46 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 118 pg[10.19( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=84/84 les/c/f=85/85/0 sis=118) [0]/[1] r=-1 lpr=118 pi=[84,118)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 15 10:40:46 compute-0 systemd[1]: libpod-conmon-2e7dc8b37d42b6d33cc8d7b7304efe6642c0fb75c5a826b1620c61f876841919.scope: Deactivated successfully.
Dec 15 10:40:46 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:40:47 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:47 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003fd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Dec 15 10:40:47 compute-0 ceph-mon[74356]: pgmap v54: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 15 10:40:47 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 15 10:40:47 compute-0 ceph-mon[74356]: osdmap e118: 3 total, 3 up, 3 in
Dec 15 10:40:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Dec 15 10:40:47 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Dec 15 10:40:48 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v57: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 540 B/s rd, 0 op/s
Dec 15 10:40:48 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Dec 15 10:40:48 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec 15 10:40:48 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:48 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:48 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:48 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc0043c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:48 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:48 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:48 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:48.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:48 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:48 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:48 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:48.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:48 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Dec 15 10:40:48 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 15 10:40:48 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Dec 15 10:40:48 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Dec 15 10:40:48 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 120 pg[10.1b( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=88/88 les/c/f=89/89/0 sis=120) [0] r=0 lpr=120 pi=[88,120)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:40:48 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 120 pg[10.19( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=7 ec=59/47 lis/c=118/84 les/c/f=119/85/0 sis=120) [0] r=0 lpr=120 pi=[84,120)/1 luod=0'0 crt=54'1067 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:40:48 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 120 pg[10.19( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=7 ec=59/47 lis/c=118/84 les/c/f=119/85/0 sis=120) [0] r=0 lpr=120 pi=[84,120)/1 crt=54'1067 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:40:48 compute-0 ceph-mon[74356]: osdmap e119: 3 total, 3 up, 3 in
Dec 15 10:40:48 compute-0 ceph-mon[74356]: pgmap v57: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 540 B/s rd, 0 op/s
Dec 15 10:40:48 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec 15 10:40:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Dec 15 10:40:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Dec 15 10:40:49 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Dec 15 10:40:49 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 121 pg[10.1b( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=88/88 les/c/f=89/89/0 sis=121) [0]/[1] r=-1 lpr=121 pi=[88,121)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:40:49 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 121 pg[10.1b( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=88/88 les/c/f=89/89/0 sis=121) [0]/[1] r=-1 lpr=121 pi=[88,121)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 15 10:40:49 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 15 10:40:49 compute-0 ceph-mon[74356]: osdmap e120: 3 total, 3 up, 3 in
Dec 15 10:40:49 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 121 pg[10.19( v 54'1067 (0'0,54'1067] local-lis/les=120/121 n=7 ec=59/47 lis/c=118/84 les/c/f=119/85/0 sis=120) [0] r=0 lpr=120 pi=[84,120)/1 crt=54'1067 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:40:50 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v60: 353 pgs: 1 unknown, 1 peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Dec 15 10:40:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:50 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003ff0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:50 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:50 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:50 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000037s ======
Dec 15 10:40:50 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:50.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Dec 15 10:40:50 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:50 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000037s ======
Dec 15 10:40:50 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:50.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Dec 15 10:40:50 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Dec 15 10:40:50 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Dec 15 10:40:50 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Dec 15 10:40:50 compute-0 ceph-mon[74356]: osdmap e121: 3 total, 3 up, 3 in
Dec 15 10:40:50 compute-0 ceph-mon[74356]: pgmap v60: 353 pgs: 1 unknown, 1 peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Dec 15 10:40:51 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:51 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc0043c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:51 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 15 10:40:51 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Dec 15 10:40:51 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Dec 15 10:40:51 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Dec 15 10:40:51 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 123 pg[10.1b( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=2 ec=59/47 lis/c=121/88 les/c/f=122/89/0 sis=123) [0] r=0 lpr=123 pi=[88,123)/1 luod=0'0 crt=54'1067 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 15 10:40:51 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 123 pg[10.1b( v 54'1067 (0'0,54'1067] local-lis/les=0/0 n=2 ec=59/47 lis/c=121/88 les/c/f=122/89/0 sis=123) [0] r=0 lpr=123 pi=[88,123)/1 crt=54'1067 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 15 10:40:52 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v63: 353 pgs: 1 unknown, 1 peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Dec 15 10:40:52 compute-0 ceph-mon[74356]: osdmap e122: 3 total, 3 up, 3 in
Dec 15 10:40:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:52 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:52 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:52 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000038s ======
Dec 15 10:40:52 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:52.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Dec 15 10:40:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:52 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0004010 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:52 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:52 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:52 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:52.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:40:52] "GET /metrics HTTP/1.1" 200 48223 "" "Prometheus/2.51.0"
Dec 15 10:40:52 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:40:52] "GET /metrics HTTP/1.1" 200 48223 "" "Prometheus/2.51.0"
Dec 15 10:40:52 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Dec 15 10:40:52 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Dec 15 10:40:52 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Dec 15 10:40:52 compute-0 ceph-osd[82838]: osd.0 pg_epoch: 124 pg[10.1b( v 54'1067 (0'0,54'1067] local-lis/les=123/124 n=2 ec=59/47 lis/c=121/88 les/c/f=122/89/0 sis=123) [0] r=0 lpr=123 pi=[88,123)/1 crt=54'1067 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 15 10:40:53 compute-0 ceph-mon[74356]: osdmap e123: 3 total, 3 up, 3 in
Dec 15 10:40:53 compute-0 ceph-mon[74356]: pgmap v63: 353 pgs: 1 unknown, 1 peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Dec 15 10:40:53 compute-0 ceph-mon[74356]: osdmap e124: 3 total, 3 up, 3 in
Dec 15 10:40:53 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:53 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:54 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v65: 353 pgs: 1 unknown, 1 peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:54 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:54 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc0043c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:54 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:54 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:54 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:54 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:54 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:54.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:54 compute-0 sudo[103601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 15 10:40:54 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:54 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:54 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:54.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:54 compute-0 sudo[103601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:40:54 compute-0 sudo[103601]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:55 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:55 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0004030 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:55 compute-0 ceph-mon[74356]: pgmap v65: 353 pgs: 1 unknown, 1 peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:40:56 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v66: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 36 B/s, 0 objects/s recovering
Dec 15 10:40:56 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Dec 15 10:40:56 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec 15 10:40:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:56 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:56 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Dec 15 10:40:56 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 15 10:40:56 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Dec 15 10:40:56 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Dec 15 10:40:56 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec 15 10:40:56 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:56 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:56 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:56.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:56 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc0043c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:56 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:56 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:56 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:56.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:56 compute-0 sudo[103651]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihdbihyafmskzbnagldodlctanzbboaw ; /usr/bin/python3'
Dec 15 10:40:56 compute-0 sudo[103651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:40:56 compute-0 python3[103653]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:40:56 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:40:56 compute-0 podman[103654]: 2025-12-15 10:40:56.809823813 +0000 UTC m=+0.024722506 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:40:57 compute-0 podman[103654]: 2025-12-15 10:40:57.124570594 +0000 UTC m=+0.339469267 container create a76812a9c94a7c918e32504d929da311e0aed82e276f35aecbeab8b94c1bb82f (image=quay.io/ceph/ceph:v19, name=peaceful_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 15 10:40:57 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:57 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:57 compute-0 systemd[1]: Started libpod-conmon-a76812a9c94a7c918e32504d929da311e0aed82e276f35aecbeab8b94c1bb82f.scope.
Dec 15 10:40:57 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:40:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3867fd05b7f7617ecb4575bfcd3466fccda3ca8e414b6ca2ac1a1159b4d6ea9e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3867fd05b7f7617ecb4575bfcd3466fccda3ca8e414b6ca2ac1a1159b4d6ea9e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:40:57 compute-0 podman[103654]: 2025-12-15 10:40:57.580276099 +0000 UTC m=+0.795174762 container init a76812a9c94a7c918e32504d929da311e0aed82e276f35aecbeab8b94c1bb82f (image=quay.io/ceph/ceph:v19, name=peaceful_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:40:57 compute-0 ceph-mon[74356]: pgmap v66: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 36 B/s, 0 objects/s recovering
Dec 15 10:40:57 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 15 10:40:57 compute-0 ceph-mon[74356]: osdmap e125: 3 total, 3 up, 3 in
Dec 15 10:40:57 compute-0 podman[103654]: 2025-12-15 10:40:57.592363866 +0000 UTC m=+0.807262529 container start a76812a9c94a7c918e32504d929da311e0aed82e276f35aecbeab8b94c1bb82f (image=quay.io/ceph/ceph:v19, name=peaceful_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:40:57 compute-0 podman[103654]: 2025-12-15 10:40:57.621256407 +0000 UTC m=+0.836155150 container attach a76812a9c94a7c918e32504d929da311e0aed82e276f35aecbeab8b94c1bb82f (image=quay.io/ceph/ceph:v19, name=peaceful_ardinghelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:40:57 compute-0 peaceful_ardinghelli[103670]: ERROR: invalid flag --daemon-type
Dec 15 10:40:57 compute-0 systemd[1]: libpod-a76812a9c94a7c918e32504d929da311e0aed82e276f35aecbeab8b94c1bb82f.scope: Deactivated successfully.
Dec 15 10:40:57 compute-0 podman[103654]: 2025-12-15 10:40:57.666230343 +0000 UTC m=+0.881129046 container died a76812a9c94a7c918e32504d929da311e0aed82e276f35aecbeab8b94c1bb82f (image=quay.io/ceph/ceph:v19, name=peaceful_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 15 10:40:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-3867fd05b7f7617ecb4575bfcd3466fccda3ca8e414b6ca2ac1a1159b4d6ea9e-merged.mount: Deactivated successfully.
Dec 15 10:40:57 compute-0 podman[103654]: 2025-12-15 10:40:57.82132843 +0000 UTC m=+1.036227093 container remove a76812a9c94a7c918e32504d929da311e0aed82e276f35aecbeab8b94c1bb82f (image=quay.io/ceph/ceph:v19, name=peaceful_ardinghelli, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 15 10:40:57 compute-0 systemd[1]: libpod-conmon-a76812a9c94a7c918e32504d929da311e0aed82e276f35aecbeab8b94c1bb82f.scope: Deactivated successfully.
Dec 15 10:40:57 compute-0 sudo[103651]: pam_unix(sudo:session): session closed for user root
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v68: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 497 B/s rd, 0 op/s; 35 B/s, 0 objects/s recovering
Dec 15 10:40:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Dec 15 10:40:58 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [balancer INFO root] Optimize plan auto_2025-12-15_10:40:58
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [balancer INFO root] do_upmap
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [balancer INFO root] pools ['images', 'backups', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'volumes', '.mgr', 'vms', '.nfs', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control']
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:40:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 15 10:40:58 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 15 10:40:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 15 10:40:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:58 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:58 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:58 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:58 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:40:58.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:58 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:58 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:40:58 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:40:58 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:40:58.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:40:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Dec 15 10:40:59 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec 15 10:40:59 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:40:59 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:40:59 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:40:59 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 15 10:40:59 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Dec 15 10:40:59 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Dec 15 10:41:00 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v70: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 428 B/s rd, 0 op/s; 30 B/s, 0 objects/s recovering
Dec 15 10:41:00 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Dec 15 10:41:00 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec 15 10:41:00 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:00 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004220 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:00 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:00 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000037s ======
Dec 15 10:41:00 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:00.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Dec 15 10:41:00 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:00 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0004070 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:00 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:00 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:00 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:00.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:00 compute-0 ceph-mon[74356]: pgmap v68: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 497 B/s rd, 0 op/s; 35 B/s, 0 objects/s recovering
Dec 15 10:41:00 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 15 10:41:00 compute-0 ceph-mon[74356]: osdmap e126: 3 total, 3 up, 3 in
Dec 15 10:41:00 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec 15 10:41:00 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Dec 15 10:41:01 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:01 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:01 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 15 10:41:01 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Dec 15 10:41:01 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Dec 15 10:41:01 compute-0 ceph-mon[74356]: pgmap v70: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 428 B/s rd, 0 op/s; 30 B/s, 0 objects/s recovering
Dec 15 10:41:01 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 15 10:41:01 compute-0 ceph-mon[74356]: osdmap e127: 3 total, 3 up, 3 in
Dec 15 10:41:01 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:41:01 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Dec 15 10:41:02 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v72: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:41:02 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 15 10:41:02 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 15 10:41:02 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Dec 15 10:41:02 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Dec 15 10:41:02 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:02 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc0043c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:02 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:02 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:02 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:02.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:02 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:02 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004240 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:02 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:02 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:02 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:02.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:02 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:41:02] "GET /metrics HTTP/1.1" 200 48223 "" "Prometheus/2.51.0"
Dec 15 10:41:02 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:41:02] "GET /metrics HTTP/1.1" 200 48223 "" "Prometheus/2.51.0"
Dec 15 10:41:03 compute-0 ceph-mon[74356]: pgmap v72: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:41:03 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 15 10:41:03 compute-0 ceph-mon[74356]: osdmap e128: 3 total, 3 up, 3 in
Dec 15 10:41:03 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Dec 15 10:41:03 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:03 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0004090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:03 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 15 10:41:03 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Dec 15 10:41:03 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Dec 15 10:41:04 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v75: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:41:04 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:04 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0004090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:04 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:04 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:04 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:04.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:04 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:04 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc0043c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:04 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:04 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:04 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:04.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:04 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Dec 15 10:41:05 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Dec 15 10:41:05 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 15 10:41:05 compute-0 ceph-mon[74356]: osdmap e129: 3 total, 3 up, 3 in
Dec 15 10:41:05 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Dec 15 10:41:05 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:05 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004240 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:06 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v77: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 662 B/s rd, 0 op/s
Dec 15 10:41:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Dec 15 10:41:06 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:06 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004240 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:06 compute-0 ceph-mon[74356]: pgmap v75: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 15 10:41:06 compute-0 ceph-mon[74356]: osdmap e130: 3 total, 3 up, 3 in
Dec 15 10:41:06 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:06 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000038s ======
Dec 15 10:41:06 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:06.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Dec 15 10:41:06 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:06 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:06 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:06 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000037s ======
Dec 15 10:41:06 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:06.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Dec 15 10:41:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Dec 15 10:41:06 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Dec 15 10:41:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:41:06 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Dec 15 10:41:07 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Dec 15 10:41:07 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Dec 15 10:41:07 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:07 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4001a70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:07 compute-0 ceph-mon[74356]: pgmap v77: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 662 B/s rd, 0 op/s
Dec 15 10:41:07 compute-0 ceph-mon[74356]: osdmap e131: 3 total, 3 up, 3 in
Dec 15 10:41:07 compute-0 ceph-mon[74356]: osdmap e132: 3 total, 3 up, 3 in
Dec 15 10:41:07 compute-0 sudo[103737]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glhaxluapcyhopjsqfqbilxqlsovhvgk ; /usr/bin/python3'
Dec 15 10:41:07 compute-0 sudo[103737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:41:08 compute-0 python3[103739]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:41:08 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v80: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 679 B/s rd, 0 op/s
Dec 15 10:41:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Dec 15 10:41:08 compute-0 podman[103740]: 2025-12-15 10:41:08.129156425 +0000 UTC m=+0.022576667 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:41:08 compute-0 podman[103740]: 2025-12-15 10:41:08.270865465 +0000 UTC m=+0.164285687 container create 48f082c4e191f1a38f8248bfb3c8fc0876a154e9136539541abe692220dad3bb (image=quay.io/ceph/ceph:v19, name=admiring_golick, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:41:08 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Dec 15 10:41:08 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Dec 15 10:41:08 compute-0 systemd[1]: Started libpod-conmon-48f082c4e191f1a38f8248bfb3c8fc0876a154e9136539541abe692220dad3bb.scope.
Dec 15 10:41:08 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:41:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7ee4043b5e1c252f6af00ca86349dd3a32d828ef168b50865581329fa0954e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:41:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7ee4043b5e1c252f6af00ca86349dd3a32d828ef168b50865581329fa0954e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:41:08 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:08 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0004090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:08 compute-0 podman[103740]: 2025-12-15 10:41:08.373822839 +0000 UTC m=+0.267243081 container init 48f082c4e191f1a38f8248bfb3c8fc0876a154e9136539541abe692220dad3bb (image=quay.io/ceph/ceph:v19, name=admiring_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:41:08 compute-0 podman[103740]: 2025-12-15 10:41:08.381636969 +0000 UTC m=+0.275057191 container start 48f082c4e191f1a38f8248bfb3c8fc0876a154e9136539541abe692220dad3bb (image=quay.io/ceph/ceph:v19, name=admiring_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:41:08 compute-0 admiring_golick[103756]: ERROR: invalid flag --daemon-type
Dec 15 10:41:08 compute-0 systemd[1]: libpod-48f082c4e191f1a38f8248bfb3c8fc0876a154e9136539541abe692220dad3bb.scope: Deactivated successfully.
Dec 15 10:41:08 compute-0 podman[103740]: 2025-12-15 10:41:08.442838937 +0000 UTC m=+0.336259179 container attach 48f082c4e191f1a38f8248bfb3c8fc0876a154e9136539541abe692220dad3bb (image=quay.io/ceph/ceph:v19, name=admiring_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 15 10:41:08 compute-0 podman[103740]: 2025-12-15 10:41:08.443692938 +0000 UTC m=+0.337113160 container died 48f082c4e191f1a38f8248bfb3c8fc0876a154e9136539541abe692220dad3bb (image=quay.io/ceph/ceph:v19, name=admiring_golick, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 15 10:41:08 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:08 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:08 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:08.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:08 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:08 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0004090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:08 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:08 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:08 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:08.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a7ee4043b5e1c252f6af00ca86349dd3a32d828ef168b50865581329fa0954e-merged.mount: Deactivated successfully.
Dec 15 10:41:08 compute-0 podman[103740]: 2025-12-15 10:41:08.704778471 +0000 UTC m=+0.598198733 container remove 48f082c4e191f1a38f8248bfb3c8fc0876a154e9136539541abe692220dad3bb (image=quay.io/ceph/ceph:v19, name=admiring_golick, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:41:08 compute-0 sudo[103737]: pam_unix(sudo:session): session closed for user root
Dec 15 10:41:08 compute-0 systemd[1]: libpod-conmon-48f082c4e191f1a38f8248bfb3c8fc0876a154e9136539541abe692220dad3bb.scope: Deactivated successfully.
Dec 15 10:41:09 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:09 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0004090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:09 compute-0 ceph-mon[74356]: pgmap v80: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 679 B/s rd, 0 op/s
Dec 15 10:41:09 compute-0 ceph-mon[74356]: osdmap e133: 3 total, 3 up, 3 in
Dec 15 10:41:10 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v82: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 89 B/s, 4 objects/s recovering
Dec 15 10:41:10 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:10 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4001a70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:10 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:10 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000038s ======
Dec 15 10:41:10 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:10.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Dec 15 10:41:10 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:10 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0004090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:10 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:10 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:10 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:10.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:10 compute-0 ceph-mon[74356]: pgmap v82: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 89 B/s, 4 objects/s recovering
Dec 15 10:41:10 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0[101910]: logger=infra.usagestats t=2025-12-15T10:41:10.928731849Z level=info msg="Usage stats are ready to report"
Dec 15 10:41:11 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:11 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0004090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:12 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v83: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 73 B/s, 3 objects/s recovering
Dec 15 10:41:12 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:41:12 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:12 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0004090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:12 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:12 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000038s ======
Dec 15 10:41:12 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:12.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Dec 15 10:41:12 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:12 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4001a70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:12 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:12 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:12 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:12.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:12 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:41:12] "GET /metrics HTTP/1.1" 200 48223 "" "Prometheus/2.51.0"
Dec 15 10:41:12 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:41:12] "GET /metrics HTTP/1.1" 200 48223 "" "Prometheus/2.51.0"
Dec 15 10:41:13 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 15 10:41:13 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:41:13 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:13 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4001320 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:14 compute-0 ceph-mon[74356]: pgmap v83: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 73 B/s, 3 objects/s recovering
Dec 15 10:41:14 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:41:14 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v84: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 58 B/s, 2 objects/s recovering
Dec 15 10:41:14 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:14 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0004090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:14 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:14 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:14 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:14.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:14 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:14 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:14 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:14 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:14 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:14.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:14 compute-0 sudo[103797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 15 10:41:14 compute-0 sudo[103797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:41:14 compute-0 sudo[103797]: pam_unix(sudo:session): session closed for user root
Dec 15 10:41:15 compute-0 ceph-mon[74356]: pgmap v84: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 58 B/s, 2 objects/s recovering
Dec 15 10:41:15 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:15 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:16 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v85: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 344 B/s rd, 0 op/s; 49 B/s, 2 objects/s recovering
Dec 15 10:41:16 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:16 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4001320 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:16 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:16 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:16 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:16.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:16 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:16 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0004090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:16 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:16 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:16 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:16.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:17 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:41:17 compute-0 ceph-mon[74356]: pgmap v85: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 344 B/s rd, 0 op/s; 49 B/s, 2 objects/s recovering
Dec 15 10:41:17 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:17 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0004090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:18 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v86: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s; 43 B/s, 1 objects/s recovering
Dec 15 10:41:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:18 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0004090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:18 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:18 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:18 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:18.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:18 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4001a70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:18 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:18 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000037s ======
Dec 15 10:41:18 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:18.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Dec 15 10:41:18 compute-0 sudo[103849]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpgbphmkcqmokblgoisbhqerhcorcztp ; /usr/bin/python3'
Dec 15 10:41:18 compute-0 sudo[103849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:41:18 compute-0 python3[103851]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:41:19 compute-0 podman[103852]: 2025-12-15 10:41:19.018261906 +0000 UTC m=+0.044409876 container create f27ffe0283d9ed0a3fa284eff7ce0c3e68e6fe475f815d6bd6267bd3e7f4bd47 (image=quay.io/ceph/ceph:v19, name=sharp_grothendieck, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:41:19 compute-0 systemd[1]: Started libpod-conmon-f27ffe0283d9ed0a3fa284eff7ce0c3e68e6fe475f815d6bd6267bd3e7f4bd47.scope.
Dec 15 10:41:19 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:41:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ca0a4ab89382b966b01dbf3abf9eeeaee3f6f573eccfe97996744795078373/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:41:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ca0a4ab89382b966b01dbf3abf9eeeaee3f6f573eccfe97996744795078373/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:41:19 compute-0 podman[103852]: 2025-12-15 10:41:18.997113483 +0000 UTC m=+0.023261473 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:41:19 compute-0 podman[103852]: 2025-12-15 10:41:19.101784751 +0000 UTC m=+0.127932741 container init f27ffe0283d9ed0a3fa284eff7ce0c3e68e6fe475f815d6bd6267bd3e7f4bd47 (image=quay.io/ceph/ceph:v19, name=sharp_grothendieck, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 15 10:41:19 compute-0 podman[103852]: 2025-12-15 10:41:19.108610954 +0000 UTC m=+0.134758924 container start f27ffe0283d9ed0a3fa284eff7ce0c3e68e6fe475f815d6bd6267bd3e7f4bd47 (image=quay.io/ceph/ceph:v19, name=sharp_grothendieck, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 15 10:41:19 compute-0 podman[103852]: 2025-12-15 10:41:19.112057562 +0000 UTC m=+0.138205552 container attach f27ffe0283d9ed0a3fa284eff7ce0c3e68e6fe475f815d6bd6267bd3e7f4bd47 (image=quay.io/ceph/ceph:v19, name=sharp_grothendieck, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 15 10:41:19 compute-0 sharp_grothendieck[103867]: ERROR: invalid flag --daemon-type
Dec 15 10:41:19 compute-0 systemd[1]: libpod-f27ffe0283d9ed0a3fa284eff7ce0c3e68e6fe475f815d6bd6267bd3e7f4bd47.scope: Deactivated successfully.
Dec 15 10:41:19 compute-0 podman[103852]: 2025-12-15 10:41:19.157617339 +0000 UTC m=+0.183765329 container died f27ffe0283d9ed0a3fa284eff7ce0c3e68e6fe475f815d6bd6267bd3e7f4bd47 (image=quay.io/ceph/ceph:v19, name=sharp_grothendieck, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:41:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0ca0a4ab89382b966b01dbf3abf9eeeaee3f6f573eccfe97996744795078373-merged.mount: Deactivated successfully.
Dec 15 10:41:19 compute-0 ceph-mon[74356]: pgmap v86: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s; 43 B/s, 1 objects/s recovering
Dec 15 10:41:19 compute-0 podman[103852]: 2025-12-15 10:41:19.197270149 +0000 UTC m=+0.223418119 container remove f27ffe0283d9ed0a3fa284eff7ce0c3e68e6fe475f815d6bd6267bd3e7f4bd47 (image=quay.io/ceph/ceph:v19, name=sharp_grothendieck, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 15 10:41:19 compute-0 systemd[1]: libpod-conmon-f27ffe0283d9ed0a3fa284eff7ce0c3e68e6fe475f815d6bd6267bd3e7f4bd47.scope: Deactivated successfully.
Dec 15 10:41:19 compute-0 sudo[103849]: pam_unix(sudo:session): session closed for user root
Dec 15 10:41:19 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:19 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:20 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v87: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 259 B/s rd, 0 op/s; 37 B/s, 1 objects/s recovering
Dec 15 10:41:20 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:20 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:20 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:20 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000038s ======
Dec 15 10:41:20 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:20.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Dec 15 10:41:20 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:20 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:20 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:20 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:20 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:20.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:21 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004240 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:21 compute-0 ceph-mon[74356]: pgmap v87: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 259 B/s rd, 0 op/s; 37 B/s, 1 objects/s recovering
Dec 15 10:41:22 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v88: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:41:22 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:41:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:22 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4001a70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:22 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:22 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:22 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:22.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:22 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4001a70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:22 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:22 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:22 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:22.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:41:22] "GET /metrics HTTP/1.1" 200 48223 "" "Prometheus/2.51.0"
Dec 15 10:41:22 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:41:22] "GET /metrics HTTP/1.1" 200 48223 "" "Prometheus/2.51.0"
Dec 15 10:41:23 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:23 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4001a70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:23 compute-0 ceph-mon[74356]: pgmap v88: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:41:24 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v89: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:41:24 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:24 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004240 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:24 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:24 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000038s ======
Dec 15 10:41:24 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:24.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Dec 15 10:41:24 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:24 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:24 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:24 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000037s ======
Dec 15 10:41:24 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:24.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Dec 15 10:41:25 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:25 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40089d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:25 compute-0 ceph-mon[74356]: pgmap v89: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:41:26 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v90: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 15 10:41:26 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:26 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003e90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:26 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:26 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000037s ======
Dec 15 10:41:26 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:26.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Dec 15 10:41:26 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:26 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004240 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:26 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:26 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:26 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:26.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:26 compute-0 ceph-mon[74356]: pgmap v90: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 15 10:41:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:41:27 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:27 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:28 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v91: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:41:28 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 15 10:41:28 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:41:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:41:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:41:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:41:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fb75bb32e50>)]
Dec 15 10:41:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 15 10:41:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:41:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fb75bb32df0>)]
Dec 15 10:41:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 15 10:41:28 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:28 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40089d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:28 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:41:28 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:28 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000037s ======
Dec 15 10:41:28 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:28.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Dec 15 10:41:28 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:28 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003e90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:28 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:28 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:28 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:28.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:29 compute-0 sudo[103933]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpdbpazdmouufqfunjmpasjrurznjkhy ; /usr/bin/python3'
Dec 15 10:41:29 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:29 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004240 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:29 compute-0 sudo[103933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:41:29 compute-0 python3[103935]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:41:29 compute-0 ceph-mon[74356]: pgmap v91: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:41:29 compute-0 ceph-mon[74356]: log_channel(cluster) log [DBG] : mgrmap e33: compute-0.difmqj(active, since 91s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:41:29 compute-0 podman[103936]: 2025-12-15 10:41:29.4850811 +0000 UTC m=+0.023646417 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:41:29 compute-0 podman[103936]: 2025-12-15 10:41:29.75797381 +0000 UTC m=+0.296539107 container create 08984f63a0ff2402d8622fad61a0c3a1c5a7dc7c325050ef46e5a50a2beeef1d (image=quay.io/ceph/ceph:v19, name=trusting_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 15 10:41:30 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v92: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Dec 15 10:41:30 compute-0 systemd[1]: Started libpod-conmon-08984f63a0ff2402d8622fad61a0c3a1c5a7dc7c325050ef46e5a50a2beeef1d.scope.
Dec 15 10:41:30 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18ac49ee645cf96eb0d493705b63f65fbbda8c7d488da0c79dc781e32df78d1a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18ac49ee645cf96eb0d493705b63f65fbbda8c7d488da0c79dc781e32df78d1a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:41:30 compute-0 podman[103936]: 2025-12-15 10:41:30.317550892 +0000 UTC m=+0.856116219 container init 08984f63a0ff2402d8622fad61a0c3a1c5a7dc7c325050ef46e5a50a2beeef1d (image=quay.io/ceph/ceph:v19, name=trusting_roentgen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:41:30 compute-0 podman[103936]: 2025-12-15 10:41:30.330070407 +0000 UTC m=+0.868635704 container start 08984f63a0ff2402d8622fad61a0c3a1c5a7dc7c325050ef46e5a50a2beeef1d (image=quay.io/ceph/ceph:v19, name=trusting_roentgen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 15 10:41:30 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:30 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:30 compute-0 trusting_roentgen[103951]: ERROR: invalid flag --daemon-type
Dec 15 10:41:30 compute-0 systemd[1]: libpod-08984f63a0ff2402d8622fad61a0c3a1c5a7dc7c325050ef46e5a50a2beeef1d.scope: Deactivated successfully.
Dec 15 10:41:30 compute-0 podman[103936]: 2025-12-15 10:41:30.440005479 +0000 UTC m=+0.978570796 container attach 08984f63a0ff2402d8622fad61a0c3a1c5a7dc7c325050ef46e5a50a2beeef1d (image=quay.io/ceph/ceph:v19, name=trusting_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 15 10:41:30 compute-0 podman[103936]: 2025-12-15 10:41:30.441700933 +0000 UTC m=+0.980266230 container died 08984f63a0ff2402d8622fad61a0c3a1c5a7dc7c325050ef46e5a50a2beeef1d (image=quay.io/ceph/ceph:v19, name=trusting_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:41:30 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:30 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000037s ======
Dec 15 10:41:30 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:30.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Dec 15 10:41:30 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:30 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40096e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:30 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:30 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:30 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:30.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:30 compute-0 ceph-mon[74356]: mgrmap e33: compute-0.difmqj(active, since 91s), standbys: compute-1.tlqguq, compute-2.gxhwsu
Dec 15 10:41:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-18ac49ee645cf96eb0d493705b63f65fbbda8c7d488da0c79dc781e32df78d1a-merged.mount: Deactivated successfully.
Dec 15 10:41:30 compute-0 podman[103936]: 2025-12-15 10:41:30.910508201 +0000 UTC m=+1.449073498 container remove 08984f63a0ff2402d8622fad61a0c3a1c5a7dc7c325050ef46e5a50a2beeef1d (image=quay.io/ceph/ceph:v19, name=trusting_roentgen, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 15 10:41:30 compute-0 systemd[1]: libpod-conmon-08984f63a0ff2402d8622fad61a0c3a1c5a7dc7c325050ef46e5a50a2beeef1d.scope: Deactivated successfully.
Dec 15 10:41:30 compute-0 sudo[103933]: pam_unix(sudo:session): session closed for user root
Dec 15 10:41:31 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:31 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003e90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:31 compute-0 ceph-mon[74356]: pgmap v92: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Dec 15 10:41:32 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v93: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Dec 15 10:41:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:41:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:32 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003e90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:32 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:32 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000038s ======
Dec 15 10:41:32 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:32.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Dec 15 10:41:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:32 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:32 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:32 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000038s ======
Dec 15 10:41:32 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:32.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Dec 15 10:41:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:41:32] "GET /metrics HTTP/1.1" 200 48215 "" "Prometheus/2.51.0"
Dec 15 10:41:32 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:41:32] "GET /metrics HTTP/1.1" 200 48215 "" "Prometheus/2.51.0"
Dec 15 10:41:33 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:33 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40096e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:33 compute-0 ceph-mon[74356]: pgmap v93: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Dec 15 10:41:34 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v94: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Dec 15 10:41:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:34 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004240 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:34 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:34 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:34 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:34.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:34 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003e90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:34 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:34 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:34 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:34.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:34 compute-0 sudo[103990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 15 10:41:34 compute-0 sudo[103990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:41:34 compute-0 sudo[103990]: pam_unix(sudo:session): session closed for user root
Dec 15 10:41:34 compute-0 ceph-mon[74356]: pgmap v94: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Dec 15 10:41:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:35 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:36 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v95: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 0 op/s
Dec 15 10:41:36 compute-0 sudo[104016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:41:36 compute-0 sudo[104016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:41:36 compute-0 sudo[104016]: pam_unix(sudo:session): session closed for user root
Dec 15 10:41:36 compute-0 sudo[104041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 15 10:41:36 compute-0 sudo[104041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:41:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:36 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40096e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:36 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:36 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:41:36 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:36.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:41:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:36 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004240 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:36 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:36 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:36 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:36.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:36 compute-0 podman[104140]: 2025-12-15 10:41:36.727494803 +0000 UTC m=+0.053902325 container exec 79ed8dc51b1852fd831764c2817cfa3745ae4937bc9015bdb965e376ab3cc58d (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec 15 10:41:36 compute-0 podman[104140]: 2025-12-15 10:41:36.858555987 +0000 UTC m=+0.184963499 container exec_died 79ed8dc51b1852fd831764c2817cfa3745ae4937bc9015bdb965e376ab3cc58d (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 15 10:41:37 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:41:37 compute-0 ceph-mon[74356]: pgmap v95: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 0 op/s
Dec 15 10:41:37 compute-0 podman[104260]: 2025-12-15 10:41:37.278634858 +0000 UTC m=+0.048674630 container exec 4840dadc82f34f3188660955046170e50347603e400e5beeba87756514ef7863 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:41:37 compute-0 podman[104260]: 2025-12-15 10:41:37.291574757 +0000 UTC m=+0.061614539 container exec_died 4840dadc82f34f3188660955046170e50347603e400e5beeba87756514ef7863 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:41:37 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:37 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003e90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:37 compute-0 podman[104348]: 2025-12-15 10:41:37.808950114 +0000 UTC m=+0.271809955 container exec c0f38bc539f687c54b7eb01f058f3691b3a24f961ff23e54889334c42825f1f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 15 10:41:37 compute-0 podman[104368]: 2025-12-15 10:41:37.89137423 +0000 UTC m=+0.059689878 container exec_died c0f38bc539f687c54b7eb01f058f3691b3a24f961ff23e54889334c42825f1f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:41:37 compute-0 podman[104348]: 2025-12-15 10:41:37.896683688 +0000 UTC m=+0.359543519 container exec_died c0f38bc539f687c54b7eb01f058f3691b3a24f961ff23e54889334c42825f1f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:41:38 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v96: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Dec 15 10:41:38 compute-0 podman[104413]: 2025-12-15 10:41:38.33799253 +0000 UTC m=+0.291065293 container exec 55276bcc9605a59cf148bdf11fc4eb753ca401193f2349bda31c6714ea73d19e (image=quay.io/ceph/haproxy:2.3, name=ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-nfs-cephfs-compute-0-ykblqa)
Dec 15 10:41:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:38 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:38 compute-0 podman[104434]: 2025-12-15 10:41:38.40946364 +0000 UTC m=+0.053694509 container exec_died 55276bcc9605a59cf148bdf11fc4eb753ca401193f2349bda31c6714ea73d19e (image=quay.io/ceph/haproxy:2.3, name=ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-nfs-cephfs-compute-0-ykblqa)
Dec 15 10:41:38 compute-0 podman[104413]: 2025-12-15 10:41:38.487025502 +0000 UTC m=+0.440098235 container exec_died 55276bcc9605a59cf148bdf11fc4eb753ca401193f2349bda31c6714ea73d19e (image=quay.io/ceph/haproxy:2.3, name=ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-nfs-cephfs-compute-0-ykblqa)
Dec 15 10:41:38 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:38 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:38 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:38.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:38 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40096e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:38 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:38 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:38 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:38.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:38 compute-0 podman[104480]: 2025-12-15 10:41:38.699038735 +0000 UTC m=+0.062971222 container exec eb383ee2660aa4da80291a00f1592a5a221d1461ab8a14db8d6bf5db83163774 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, version=2.2.4, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, distribution-scope=public, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., name=keepalived)
Dec 15 10:41:38 compute-0 podman[104480]: 2025-12-15 10:41:38.71376089 +0000 UTC m=+0.077693387 container exec_died eb383ee2660aa4da80291a00f1592a5a221d1461ab8a14db8d6bf5db83163774 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, build-date=2023-02-22T09:23:20, release=1793, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, com.redhat.component=keepalived-container, io.buildah.version=1.28.2)
Dec 15 10:41:38 compute-0 podman[104544]: 2025-12-15 10:41:38.925072551 +0000 UTC m=+0.054351749 container exec 89a20608330788e4c3363bfd799e4feb78e7548f505117a7fc750d8eecfef10f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:41:38 compute-0 podman[104544]: 2025-12-15 10:41:38.949793443 +0000 UTC m=+0.079072631 container exec_died 89a20608330788e4c3363bfd799e4feb78e7548f505117a7fc750d8eecfef10f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:41:39 compute-0 podman[104616]: 2025-12-15 10:41:39.152386607 +0000 UTC m=+0.054144572 container exec 904de0ff1c362d9a2eedc2ea5dd43957c068393257172a115a9781fe3109347c (image=quay.io/ceph/grafana:10.4.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:41:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:39 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004240 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:39 compute-0 ceph-mon[74356]: pgmap v96: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Dec 15 10:41:39 compute-0 podman[104616]: 2025-12-15 10:41:39.344562174 +0000 UTC m=+0.246320139 container exec_died 904de0ff1c362d9a2eedc2ea5dd43957c068393257172a115a9781fe3109347c (image=quay.io/ceph/grafana:10.4.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:41:39 compute-0 podman[104732]: 2025-12-15 10:41:39.676128546 +0000 UTC m=+0.044672183 container exec 811bf452ef3ce2cd1829f815f500a947c1c9dba459a203c1e31a5cfea42f0cfb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:41:39 compute-0 podman[104732]: 2025-12-15 10:41:39.704693569 +0000 UTC m=+0.073237216 container exec_died 811bf452ef3ce2cd1829f815f500a947c1c9dba459a203c1e31a5cfea42f0cfb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:41:39 compute-0 sudo[104041]: pam_unix(sudo:session): session closed for user root
Dec 15 10:41:39 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:41:39 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:41:39 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:41:39 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:41:39 compute-0 sudo[104772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:41:39 compute-0 sudo[104772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:41:39 compute-0 sudo[104772]: pam_unix(sudo:session): session closed for user root
Dec 15 10:41:39 compute-0 sudo[104797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 15 10:41:39 compute-0 sudo[104797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:41:40 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v97: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Dec 15 10:41:40 compute-0 sudo[104797]: pam_unix(sudo:session): session closed for user root
Dec 15 10:41:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:40 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003e90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:41:40 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:41:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 15 10:41:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:41:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 15 10:41:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:41:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 15 10:41:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:41:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 15 10:41:40 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 15 10:41:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 15 10:41:40 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 15 10:41:40 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:41:40 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:41:40 compute-0 sudo[104855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:41:40 compute-0 sudo[104855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:41:40 compute-0 sudo[104855]: pam_unix(sudo:session): session closed for user root
Dec 15 10:41:40 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:40 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:41:40 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:40.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:41:40 compute-0 sudo[104880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 15 10:41:40 compute-0 sudo[104880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:41:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:40 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:40 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:40 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:40 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:40.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:40 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:41:40 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:41:40 compute-0 ceph-mon[74356]: pgmap v97: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Dec 15 10:41:40 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:41:40 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:41:40 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:41:40 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:41:40 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 15 10:41:40 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 15 10:41:40 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:41:40 compute-0 podman[104947]: 2025-12-15 10:41:40.923034008 +0000 UTC m=+0.048437692 container create 6cce327ce27367a2995507d6bdfacbf4e02665a90671039853d66f7554d5a42a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:41:40 compute-0 systemd[1]: Started libpod-conmon-6cce327ce27367a2995507d6bdfacbf4e02665a90671039853d66f7554d5a42a.scope.
Dec 15 10:41:40 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:41:40 compute-0 podman[104947]: 2025-12-15 10:41:40.894029761 +0000 UTC m=+0.019433465 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:41:41 compute-0 podman[104947]: 2025-12-15 10:41:41.002077768 +0000 UTC m=+0.127481462 container init 6cce327ce27367a2995507d6bdfacbf4e02665a90671039853d66f7554d5a42a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_wilbur, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:41:41 compute-0 podman[104947]: 2025-12-15 10:41:41.009712409 +0000 UTC m=+0.135116123 container start 6cce327ce27367a2995507d6bdfacbf4e02665a90671039853d66f7554d5a42a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:41:41 compute-0 epic_wilbur[104963]: 167 167
Dec 15 10:41:41 compute-0 systemd[1]: libpod-6cce327ce27367a2995507d6bdfacbf4e02665a90671039853d66f7554d5a42a.scope: Deactivated successfully.
Dec 15 10:41:41 compute-0 podman[104947]: 2025-12-15 10:41:41.01576621 +0000 UTC m=+0.141169894 container attach 6cce327ce27367a2995507d6bdfacbf4e02665a90671039853d66f7554d5a42a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 15 10:41:41 compute-0 podman[104947]: 2025-12-15 10:41:41.016228024 +0000 UTC m=+0.141631708 container died 6cce327ce27367a2995507d6bdfacbf4e02665a90671039853d66f7554d5a42a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_wilbur, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:41:41 compute-0 sudo[104989]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqwdpkubbnpwcoseudqiknulfjozxitw ; /usr/bin/python3'
Dec 15 10:41:41 compute-0 sudo[104989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:41:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdbb058e316d7b5dcd168fd07e53d2fd8edc6f0d49b9c3477a77478bf8a31536-merged.mount: Deactivated successfully.
Dec 15 10:41:41 compute-0 podman[104947]: 2025-12-15 10:41:41.058308745 +0000 UTC m=+0.183712419 container remove 6cce327ce27367a2995507d6bdfacbf4e02665a90671039853d66f7554d5a42a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_wilbur, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 15 10:41:41 compute-0 systemd[1]: libpod-conmon-6cce327ce27367a2995507d6bdfacbf4e02665a90671039853d66f7554d5a42a.scope: Deactivated successfully.
Dec 15 10:41:41 compute-0 python3[104997]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:41:41 compute-0 podman[105012]: 2025-12-15 10:41:41.208839934 +0000 UTC m=+0.050193067 container create e28564b2e245be469e0dd4d0396fa5cd584be05f1abd0a3c3c96a0a82b9ef75c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:41:41 compute-0 podman[105019]: 2025-12-15 10:41:41.230746817 +0000 UTC m=+0.058388197 container create 70d4447f81d981184ab7c7ee2daf408fe7f75c591904de9155c7e6d168d7a381 (image=quay.io/ceph/ceph:v19, name=youthful_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:41:41 compute-0 systemd[1]: Started libpod-conmon-e28564b2e245be469e0dd4d0396fa5cd584be05f1abd0a3c3c96a0a82b9ef75c.scope.
Dec 15 10:41:41 compute-0 systemd[1]: Started libpod-conmon-70d4447f81d981184ab7c7ee2daf408fe7f75c591904de9155c7e6d168d7a381.scope.
Dec 15 10:41:41 compute-0 podman[105012]: 2025-12-15 10:41:41.181303894 +0000 UTC m=+0.022657007 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:41:41 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:41:41 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb0939f800b8a802a9f847a17eaeccea6404896a66a8c83fb97b1eeffc1c118/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb0939f800b8a802a9f847a17eaeccea6404896a66a8c83fb97b1eeffc1c118/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb0939f800b8a802a9f847a17eaeccea6404896a66a8c83fb97b1eeffc1c118/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13a7f5fd5ba873f45d5b89990771b89ceff85668de257e486d28e2183513a004/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13a7f5fd5ba873f45d5b89990771b89ceff85668de257e486d28e2183513a004/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb0939f800b8a802a9f847a17eaeccea6404896a66a8c83fb97b1eeffc1c118/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb0939f800b8a802a9f847a17eaeccea6404896a66a8c83fb97b1eeffc1c118/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:41:41 compute-0 podman[105019]: 2025-12-15 10:41:41.201261534 +0000 UTC m=+0.028902944 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:41:41 compute-0 podman[105012]: 2025-12-15 10:41:41.303648912 +0000 UTC m=+0.145002015 container init e28564b2e245be469e0dd4d0396fa5cd584be05f1abd0a3c3c96a0a82b9ef75c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 15 10:41:41 compute-0 podman[105019]: 2025-12-15 10:41:41.306820592 +0000 UTC m=+0.134461992 container init 70d4447f81d981184ab7c7ee2daf408fe7f75c591904de9155c7e6d168d7a381 (image=quay.io/ceph/ceph:v19, name=youthful_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 15 10:41:41 compute-0 podman[105012]: 2025-12-15 10:41:41.311006664 +0000 UTC m=+0.152359757 container start e28564b2e245be469e0dd4d0396fa5cd584be05f1abd0a3c3c96a0a82b9ef75c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_dubinsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 15 10:41:41 compute-0 podman[105019]: 2025-12-15 10:41:41.314380761 +0000 UTC m=+0.142022141 container start 70d4447f81d981184ab7c7ee2daf408fe7f75c591904de9155c7e6d168d7a381 (image=quay.io/ceph/ceph:v19, name=youthful_kalam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:41:41 compute-0 podman[105012]: 2025-12-15 10:41:41.314438013 +0000 UTC m=+0.155791226 container attach e28564b2e245be469e0dd4d0396fa5cd584be05f1abd0a3c3c96a0a82b9ef75c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 15 10:41:41 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:41 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004240 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:41 compute-0 podman[105019]: 2025-12-15 10:41:41.317535271 +0000 UTC m=+0.145176651 container attach 70d4447f81d981184ab7c7ee2daf408fe7f75c591904de9155c7e6d168d7a381 (image=quay.io/ceph/ceph:v19, name=youthful_kalam, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Dec 15 10:41:41 compute-0 youthful_kalam[105046]: ERROR: invalid flag --daemon-type
Dec 15 10:41:41 compute-0 systemd[1]: libpod-70d4447f81d981184ab7c7ee2daf408fe7f75c591904de9155c7e6d168d7a381.scope: Deactivated successfully.
Dec 15 10:41:41 compute-0 podman[105019]: 2025-12-15 10:41:41.365835798 +0000 UTC m=+0.193477178 container died 70d4447f81d981184ab7c7ee2daf408fe7f75c591904de9155c7e6d168d7a381 (image=quay.io/ceph/ceph:v19, name=youthful_kalam, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:41:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-13a7f5fd5ba873f45d5b89990771b89ceff85668de257e486d28e2183513a004-merged.mount: Deactivated successfully.
Dec 15 10:41:41 compute-0 podman[105019]: 2025-12-15 10:41:41.408261959 +0000 UTC m=+0.235903339 container remove 70d4447f81d981184ab7c7ee2daf408fe7f75c591904de9155c7e6d168d7a381 (image=quay.io/ceph/ceph:v19, name=youthful_kalam, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:41:41 compute-0 systemd[1]: libpod-conmon-70d4447f81d981184ab7c7ee2daf408fe7f75c591904de9155c7e6d168d7a381.scope: Deactivated successfully.
Dec 15 10:41:41 compute-0 sudo[104989]: pam_unix(sudo:session): session closed for user root
Dec 15 10:41:41 compute-0 flamboyant_dubinsky[105044]: --> passed data devices: 0 physical, 1 LVM
Dec 15 10:41:41 compute-0 flamboyant_dubinsky[105044]: --> All data devices are unavailable
Dec 15 10:41:41 compute-0 systemd[1]: libpod-e28564b2e245be469e0dd4d0396fa5cd584be05f1abd0a3c3c96a0a82b9ef75c.scope: Deactivated successfully.
Dec 15 10:41:41 compute-0 podman[105012]: 2025-12-15 10:41:41.653621546 +0000 UTC m=+0.494974649 container died e28564b2e245be469e0dd4d0396fa5cd584be05f1abd0a3c3c96a0a82b9ef75c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_dubinsky, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:41:42 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v98: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:41:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:41:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-eeb0939f800b8a802a9f847a17eaeccea6404896a66a8c83fb97b1eeffc1c118-merged.mount: Deactivated successfully.
Dec 15 10:41:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:42 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40096e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:42 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:42 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:41:42 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:42.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:41:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:42 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003e90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:42 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:42 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:42 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:42.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:42 compute-0 podman[105012]: 2025-12-15 10:41:42.668519113 +0000 UTC m=+1.509872216 container remove e28564b2e245be469e0dd4d0396fa5cd584be05f1abd0a3c3c96a0a82b9ef75c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 15 10:41:42 compute-0 sudo[104880]: pam_unix(sudo:session): session closed for user root
Dec 15 10:41:42 compute-0 systemd[1]: libpod-conmon-e28564b2e245be469e0dd4d0396fa5cd584be05f1abd0a3c3c96a0a82b9ef75c.scope: Deactivated successfully.
Dec 15 10:41:42 compute-0 sudo[105104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:41:42 compute-0 sudo[105104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:41:42 compute-0 sudo[105104]: pam_unix(sudo:session): session closed for user root
Dec 15 10:41:42 compute-0 sudo[105129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 -- lvm list --format json
Dec 15 10:41:42 compute-0 sudo[105129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:41:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:41:42] "GET /metrics HTTP/1.1" 200 48215 "" "Prometheus/2.51.0"
Dec 15 10:41:42 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:41:42] "GET /metrics HTTP/1.1" 200 48215 "" "Prometheus/2.51.0"
Dec 15 10:41:43 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 15 10:41:43 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:41:43 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:43 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:43 compute-0 podman[105194]: 2025-12-15 10:41:43.224835392 +0000 UTC m=+0.033392717 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:41:43 compute-0 podman[105194]: 2025-12-15 10:41:43.387659539 +0000 UTC m=+0.196216884 container create e78a98dd79398196615205b1d1c93862db2611ee33c0cbc0620eacdd4dd69744 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_carver, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 15 10:41:43 compute-0 systemd[1]: Started libpod-conmon-e78a98dd79398196615205b1d1c93862db2611ee33c0cbc0620eacdd4dd69744.scope.
Dec 15 10:41:43 compute-0 ceph-mon[74356]: pgmap v98: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:41:43 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:41:43 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:41:43 compute-0 podman[105194]: 2025-12-15 10:41:43.863270606 +0000 UTC m=+0.671828001 container init e78a98dd79398196615205b1d1c93862db2611ee33c0cbc0620eacdd4dd69744 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_carver, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 15 10:41:43 compute-0 podman[105194]: 2025-12-15 10:41:43.873292183 +0000 UTC m=+0.681849488 container start e78a98dd79398196615205b1d1c93862db2611ee33c0cbc0620eacdd4dd69744 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_carver, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 15 10:41:43 compute-0 wizardly_carver[105211]: 167 167
Dec 15 10:41:43 compute-0 systemd[1]: libpod-e78a98dd79398196615205b1d1c93862db2611ee33c0cbc0620eacdd4dd69744.scope: Deactivated successfully.
Dec 15 10:41:44 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v99: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:41:44 compute-0 podman[105194]: 2025-12-15 10:41:44.156256559 +0000 UTC m=+0.964813874 container attach e78a98dd79398196615205b1d1c93862db2611ee33c0cbc0620eacdd4dd69744 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_carver, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 15 10:41:44 compute-0 podman[105194]: 2025-12-15 10:41:44.157669973 +0000 UTC m=+0.966227308 container died e78a98dd79398196615205b1d1c93862db2611ee33c0cbc0620eacdd4dd69744 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_carver, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 15 10:41:44 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:44 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0004240 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:44 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:44 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:41:44 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:44.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:41:44 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:44 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40096e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:44 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:44 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:44 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:44.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-82bcdd725da2710cf5b5cdf299c65fe2c3c3530655d9c4a4273b29a29b84b9df-merged.mount: Deactivated successfully.
Dec 15 10:41:45 compute-0 ceph-mon[74356]: pgmap v99: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:41:45 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:45 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003e90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:46 compute-0 podman[105194]: 2025-12-15 10:41:46.091272966 +0000 UTC m=+2.899830271 container remove e78a98dd79398196615205b1d1c93862db2611ee33c0cbc0620eacdd4dd69744 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_carver, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:41:46 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v100: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 15 10:41:46 compute-0 systemd[1]: libpod-conmon-e78a98dd79398196615205b1d1c93862db2611ee33c0cbc0620eacdd4dd69744.scope: Deactivated successfully.
Dec 15 10:41:46 compute-0 podman[105238]: 2025-12-15 10:41:46.239165672 +0000 UTC m=+0.022163242 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:41:46 compute-0 podman[105238]: 2025-12-15 10:41:46.33810154 +0000 UTC m=+0.121099090 container create 54e2c810d8ecdb7647c96789e2d7ed15fb260ebc4c7ac4ce736ff2ff63353de2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bhabha, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 15 10:41:46 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:46 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc002140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:46 compute-0 systemd[1]: Started libpod-conmon-54e2c810d8ecdb7647c96789e2d7ed15fb260ebc4c7ac4ce736ff2ff63353de2.scope.
Dec 15 10:41:46 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:41:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f85495035cfc725bfd7d05a7f0daeb956ac598c819c65c0fdd9d1ed8f7f7a3dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:41:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f85495035cfc725bfd7d05a7f0daeb956ac598c819c65c0fdd9d1ed8f7f7a3dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:41:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f85495035cfc725bfd7d05a7f0daeb956ac598c819c65c0fdd9d1ed8f7f7a3dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:41:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f85495035cfc725bfd7d05a7f0daeb956ac598c819c65c0fdd9d1ed8f7f7a3dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:41:46 compute-0 podman[105238]: 2025-12-15 10:41:46.434015361 +0000 UTC m=+0.217012941 container init 54e2c810d8ecdb7647c96789e2d7ed15fb260ebc4c7ac4ce736ff2ff63353de2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bhabha, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:41:46 compute-0 podman[105238]: 2025-12-15 10:41:46.445774813 +0000 UTC m=+0.228772363 container start 54e2c810d8ecdb7647c96789e2d7ed15fb260ebc4c7ac4ce736ff2ff63353de2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bhabha, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 15 10:41:46 compute-0 podman[105238]: 2025-12-15 10:41:46.450563845 +0000 UTC m=+0.233561425 container attach 54e2c810d8ecdb7647c96789e2d7ed15fb260ebc4c7ac4ce736ff2ff63353de2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bhabha, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 15 10:41:46 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:46 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:46 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:46.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:46 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:46 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc002140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:46 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:46 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:41:46 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:46.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:41:46 compute-0 practical_bhabha[105254]: {
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:     "0": [
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:         {
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:             "devices": [
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:                 "/dev/loop3"
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:             ],
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:             "lv_name": "ceph_lv0",
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:             "lv_size": "21470642176",
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MCSOGR-6V69-UVGq-8FpK-DBjb-LyDE-FLUPxJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=77365f67-614e-5a8d-b658-640395550c79,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7690eca0-4e87-4157-a045-1912448da925,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:             "lv_uuid": "MCSOGR-6V69-UVGq-8FpK-DBjb-LyDE-FLUPxJ",
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:             "name": "ceph_lv0",
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:             "tags": {
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:                 "ceph.block_uuid": "MCSOGR-6V69-UVGq-8FpK-DBjb-LyDE-FLUPxJ",
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:                 "ceph.cephx_lockbox_secret": "",
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:                 "ceph.cluster_fsid": "77365f67-614e-5a8d-b658-640395550c79",
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:                 "ceph.cluster_name": "ceph",
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:                 "ceph.crush_device_class": "",
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:                 "ceph.encrypted": "0",
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:                 "ceph.osd_fsid": "7690eca0-4e87-4157-a045-1912448da925",
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:                 "ceph.osd_id": "0",
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:                 "ceph.type": "block",
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:                 "ceph.vdo": "0",
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:                 "ceph.with_tpm": "0"
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:             },
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:             "type": "block",
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:             "vg_name": "ceph_vg0"
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:         }
Dec 15 10:41:46 compute-0 practical_bhabha[105254]:     ]
Dec 15 10:41:46 compute-0 practical_bhabha[105254]: }
Dec 15 10:41:46 compute-0 systemd[1]: libpod-54e2c810d8ecdb7647c96789e2d7ed15fb260ebc4c7ac4ce736ff2ff63353de2.scope: Deactivated successfully.
Dec 15 10:41:46 compute-0 podman[105238]: 2025-12-15 10:41:46.747001657 +0000 UTC m=+0.529999217 container died 54e2c810d8ecdb7647c96789e2d7ed15fb260ebc4c7ac4ce736ff2ff63353de2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bhabha, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:41:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-f85495035cfc725bfd7d05a7f0daeb956ac598c819c65c0fdd9d1ed8f7f7a3dc-merged.mount: Deactivated successfully.
Dec 15 10:41:46 compute-0 podman[105238]: 2025-12-15 10:41:46.803635867 +0000 UTC m=+0.586633417 container remove 54e2c810d8ecdb7647c96789e2d7ed15fb260ebc4c7ac4ce736ff2ff63353de2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 15 10:41:46 compute-0 systemd[1]: libpod-conmon-54e2c810d8ecdb7647c96789e2d7ed15fb260ebc4c7ac4ce736ff2ff63353de2.scope: Deactivated successfully.
Dec 15 10:41:46 compute-0 sudo[105129]: pam_unix(sudo:session): session closed for user root
Dec 15 10:41:46 compute-0 ceph-mon[74356]: pgmap v100: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 15 10:41:46 compute-0 sudo[105277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:41:46 compute-0 sudo[105277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:41:46 compute-0 sudo[105277]: pam_unix(sudo:session): session closed for user root
Dec 15 10:41:46 compute-0 sudo[105302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 -- raw list --format json
Dec 15 10:41:46 compute-0 sudo[105302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:41:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:41:47 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:47 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40096e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:47 compute-0 podman[105368]: 2025-12-15 10:41:47.344310571 +0000 UTC m=+0.040769059 container create 1e314fb6e48f07883c1ace62d63696a50e395990d35aec8cabbd0a94722c0b9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 15 10:41:47 compute-0 systemd[1]: Started libpod-conmon-1e314fb6e48f07883c1ace62d63696a50e395990d35aec8cabbd0a94722c0b9a.scope.
Dec 15 10:41:47 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:41:47 compute-0 podman[105368]: 2025-12-15 10:41:47.414951735 +0000 UTC m=+0.111410253 container init 1e314fb6e48f07883c1ace62d63696a50e395990d35aec8cabbd0a94722c0b9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_williamson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:41:47 compute-0 podman[105368]: 2025-12-15 10:41:47.325425005 +0000 UTC m=+0.021883523 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:41:47 compute-0 podman[105368]: 2025-12-15 10:41:47.421385399 +0000 UTC m=+0.117843897 container start 1e314fb6e48f07883c1ace62d63696a50e395990d35aec8cabbd0a94722c0b9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_williamson, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 15 10:41:47 compute-0 podman[105368]: 2025-12-15 10:41:47.424402304 +0000 UTC m=+0.120860802 container attach 1e314fb6e48f07883c1ace62d63696a50e395990d35aec8cabbd0a94722c0b9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 15 10:41:47 compute-0 zealous_williamson[105385]: 167 167
Dec 15 10:41:47 compute-0 systemd[1]: libpod-1e314fb6e48f07883c1ace62d63696a50e395990d35aec8cabbd0a94722c0b9a.scope: Deactivated successfully.
Dec 15 10:41:47 compute-0 podman[105368]: 2025-12-15 10:41:47.426558652 +0000 UTC m=+0.123017150 container died 1e314fb6e48f07883c1ace62d63696a50e395990d35aec8cabbd0a94722c0b9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_williamson, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 15 10:41:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-95a393426982cb5fcee72ae382f8fe58daeca5e3e37dd82de7dec1853b1c6982-merged.mount: Deactivated successfully.
Dec 15 10:41:47 compute-0 podman[105368]: 2025-12-15 10:41:47.460841195 +0000 UTC m=+0.157299693 container remove 1e314fb6e48f07883c1ace62d63696a50e395990d35aec8cabbd0a94722c0b9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 15 10:41:47 compute-0 systemd[1]: libpod-conmon-1e314fb6e48f07883c1ace62d63696a50e395990d35aec8cabbd0a94722c0b9a.scope: Deactivated successfully.
Dec 15 10:41:47 compute-0 podman[105409]: 2025-12-15 10:41:47.589267936 +0000 UTC m=+0.021430489 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:41:47 compute-0 podman[105409]: 2025-12-15 10:41:47.734457887 +0000 UTC m=+0.166620420 container create 64c228c2266a893dcb9a2c54f3020f6cdb2e9648e779be66b9c11f787c3ab5aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:41:47 compute-0 systemd[1]: Started libpod-conmon-64c228c2266a893dcb9a2c54f3020f6cdb2e9648e779be66b9c11f787c3ab5aa.scope.
Dec 15 10:41:47 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b6eef759e7a6567df84ae3f08bd21225c56e5164a6f4147a8939fb0bc4576aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b6eef759e7a6567df84ae3f08bd21225c56e5164a6f4147a8939fb0bc4576aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b6eef759e7a6567df84ae3f08bd21225c56e5164a6f4147a8939fb0bc4576aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b6eef759e7a6567df84ae3f08bd21225c56e5164a6f4147a8939fb0bc4576aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:41:48 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v101: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:41:48 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:48 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003e90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:48 compute-0 podman[105409]: 2025-12-15 10:41:48.389489276 +0000 UTC m=+0.821651829 container init 64c228c2266a893dcb9a2c54f3020f6cdb2e9648e779be66b9c11f787c3ab5aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 15 10:41:48 compute-0 podman[105409]: 2025-12-15 10:41:48.396409414 +0000 UTC m=+0.828571947 container start 64c228c2266a893dcb9a2c54f3020f6cdb2e9648e779be66b9c11f787c3ab5aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_snyder, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 15 10:41:48 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:48 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:41:48 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:48.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:41:48 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:48 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc002140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:48 compute-0 podman[105409]: 2025-12-15 10:41:48.610526834 +0000 UTC m=+1.042689457 container attach 64c228c2266a893dcb9a2c54f3020f6cdb2e9648e779be66b9c11f787c3ab5aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 15 10:41:48 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:48 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:48 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:48.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:48 compute-0 ceph-mon[74356]: pgmap v101: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:41:49 compute-0 lvm[105501]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 15 10:41:49 compute-0 lvm[105501]: VG ceph_vg0 finished
Dec 15 10:41:49 compute-0 strange_snyder[105426]: {}
Dec 15 10:41:49 compute-0 systemd[1]: libpod-64c228c2266a893dcb9a2c54f3020f6cdb2e9648e779be66b9c11f787c3ab5aa.scope: Deactivated successfully.
Dec 15 10:41:49 compute-0 systemd[1]: libpod-64c228c2266a893dcb9a2c54f3020f6cdb2e9648e779be66b9c11f787c3ab5aa.scope: Consumed 1.113s CPU time.
Dec 15 10:41:49 compute-0 podman[105409]: 2025-12-15 10:41:49.101022062 +0000 UTC m=+1.533184595 container died 64c228c2266a893dcb9a2c54f3020f6cdb2e9648e779be66b9c11f787c3ab5aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_snyder, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:41:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b6eef759e7a6567df84ae3f08bd21225c56e5164a6f4147a8939fb0bc4576aa-merged.mount: Deactivated successfully.
Dec 15 10:41:49 compute-0 podman[105409]: 2025-12-15 10:41:49.149168603 +0000 UTC m=+1.581331136 container remove 64c228c2266a893dcb9a2c54f3020f6cdb2e9648e779be66b9c11f787c3ab5aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:41:49 compute-0 systemd[1]: libpod-conmon-64c228c2266a893dcb9a2c54f3020f6cdb2e9648e779be66b9c11f787c3ab5aa.scope: Deactivated successfully.
Dec 15 10:41:49 compute-0 sudo[105302]: pam_unix(sudo:session): session closed for user root
Dec 15 10:41:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:41:49 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:41:49 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:41:49 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:41:49 compute-0 sudo[105518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 15 10:41:49 compute-0 sudo[105518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:41:49 compute-0 sudo[105518]: pam_unix(sudo:session): session closed for user root
Dec 15 10:41:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc002140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:50 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v102: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:41:50 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:41:50 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:41:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:50 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40096e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:50 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:50 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:41:50 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:50.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:41:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:50 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003e90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:50 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:50 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:50 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:50.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:51 compute-0 ceph-mon[74356]: pgmap v102: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:41:51 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:51 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc002140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:51 compute-0 sudo[105569]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xahqubbiuftudpbzesqvlgtrhwnqezoq ; /usr/bin/python3'
Dec 15 10:41:51 compute-0 sudo[105569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:41:51 compute-0 python3[105571]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:41:51 compute-0 podman[105572]: 2025-12-15 10:41:51.721038856 +0000 UTC m=+0.047272206 container create af5a38aeb429f5082e8cd92f52cbc965550e6b468e45bb91f019b7bd032aca6d (image=quay.io/ceph/ceph:v19, name=serene_jackson, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:41:51 compute-0 systemd[1]: Started libpod-conmon-af5a38aeb429f5082e8cd92f52cbc965550e6b468e45bb91f019b7bd032aca6d.scope.
Dec 15 10:41:51 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:41:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c416dcac0d71cec3db6926abf73d750d7b04206596edc5ef1fa7816c7e29e62b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:41:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c416dcac0d71cec3db6926abf73d750d7b04206596edc5ef1fa7816c7e29e62b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:41:51 compute-0 podman[105572]: 2025-12-15 10:41:51.699897787 +0000 UTC m=+0.026130907 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:41:51 compute-0 podman[105572]: 2025-12-15 10:41:51.803990758 +0000 UTC m=+0.130223878 container init af5a38aeb429f5082e8cd92f52cbc965550e6b468e45bb91f019b7bd032aca6d (image=quay.io/ceph/ceph:v19, name=serene_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 15 10:41:51 compute-0 podman[105572]: 2025-12-15 10:41:51.814101118 +0000 UTC m=+0.140334218 container start af5a38aeb429f5082e8cd92f52cbc965550e6b468e45bb91f019b7bd032aca6d (image=quay.io/ceph/ceph:v19, name=serene_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:41:51 compute-0 podman[105572]: 2025-12-15 10:41:51.818251329 +0000 UTC m=+0.144484599 container attach af5a38aeb429f5082e8cd92f52cbc965550e6b468e45bb91f019b7bd032aca6d (image=quay.io/ceph/ceph:v19, name=serene_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:41:51 compute-0 serene_jackson[105589]: ERROR: invalid flag --daemon-type
Dec 15 10:41:51 compute-0 systemd[1]: libpod-af5a38aeb429f5082e8cd92f52cbc965550e6b468e45bb91f019b7bd032aca6d.scope: Deactivated successfully.
Dec 15 10:41:51 compute-0 podman[105572]: 2025-12-15 10:41:51.877962447 +0000 UTC m=+0.204195567 container died af5a38aeb429f5082e8cd92f52cbc965550e6b468e45bb91f019b7bd032aca6d (image=quay.io/ceph/ceph:v19, name=serene_jackson, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 15 10:41:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-c416dcac0d71cec3db6926abf73d750d7b04206596edc5ef1fa7816c7e29e62b-merged.mount: Deactivated successfully.
Dec 15 10:41:51 compute-0 podman[105572]: 2025-12-15 10:41:51.923407613 +0000 UTC m=+0.249640703 container remove af5a38aeb429f5082e8cd92f52cbc965550e6b468e45bb91f019b7bd032aca6d (image=quay.io/ceph/ceph:v19, name=serene_jackson, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:41:51 compute-0 systemd[1]: libpod-conmon-af5a38aeb429f5082e8cd92f52cbc965550e6b468e45bb91f019b7bd032aca6d.scope: Deactivated successfully.
Dec 15 10:41:51 compute-0 sudo[105569]: pam_unix(sudo:session): session closed for user root
Dec 15 10:41:52 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v103: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:41:52 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:41:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:52 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003e90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:52 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:52 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:41:52 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:52.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:41:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:52 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40096e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:52 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:52 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:52 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:52.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:41:52] "GET /metrics HTTP/1.1" 200 48219 "" "Prometheus/2.51.0"
Dec 15 10:41:52 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:41:52] "GET /metrics HTTP/1.1" 200 48219 "" "Prometheus/2.51.0"
Dec 15 10:41:53 compute-0 ceph-mon[74356]: pgmap v103: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:41:53 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:53 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:54 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v104: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:41:54 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:54 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e00043e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:54 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:54 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:54 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:54.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:54 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:54 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003e90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:54 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:54 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:54 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:54.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:54 compute-0 sudo[105625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 15 10:41:54 compute-0 sudo[105625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:41:54 compute-0 sudo[105625]: pam_unix(sudo:session): session closed for user root
Dec 15 10:41:55 compute-0 ceph-mon[74356]: pgmap v104: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:41:55.318633) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795315318674, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2602, "num_deletes": 252, "total_data_size": 7079182, "memory_usage": 7445872, "flush_reason": "Manual Compaction"}
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Dec 15 10:41:55 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:55 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40096e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795315368807, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 6628739, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8141, "largest_seqno": 10742, "table_properties": {"data_size": 6616413, "index_size": 7924, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 28705, "raw_average_key_size": 21, "raw_value_size": 6590278, "raw_average_value_size": 4996, "num_data_blocks": 347, "num_entries": 1319, "num_filter_entries": 1319, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765795191, "oldest_key_time": 1765795191, "file_creation_time": 1765795315, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d36e3d93-cef6-4482-9c71-0054ae87e0c9", "db_session_id": "8WPMBXYVT9DSSQWRN3T3", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 50351 microseconds, and 15810 cpu microseconds.
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:41:55.368982) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 6628739 bytes OK
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:41:55.369048) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:41:55.371245) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:41:55.371265) EVENT_LOG_v1 {"time_micros": 1765795315371259, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:41:55.371285) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 7067460, prev total WAL file size 7067460, number of live WAL files 2.
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:41:55.373538) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(6473KB)], [23(10MB)]
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795315373634, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 17889926, "oldest_snapshot_seqno": -1}
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4013 keys, 13927663 bytes, temperature: kUnknown
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795315475631, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 13927663, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13895486, "index_size": 21059, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 102337, "raw_average_key_size": 25, "raw_value_size": 13816688, "raw_average_value_size": 3442, "num_data_blocks": 904, "num_entries": 4013, "num_filter_entries": 4013, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765794889, "oldest_key_time": 0, "file_creation_time": 1765795315, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d36e3d93-cef6-4482-9c71-0054ae87e0c9", "db_session_id": "8WPMBXYVT9DSSQWRN3T3", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:41:55.476036) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 13927663 bytes
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:41:55.477560) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.4 rd, 136.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(6.3, 10.7 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(4.8) write-amplify(2.1) OK, records in: 4549, records dropped: 536 output_compression: NoCompression
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:41:55.477594) EVENT_LOG_v1 {"time_micros": 1765795315477579, "job": 8, "event": "compaction_finished", "compaction_time_micros": 101973, "compaction_time_cpu_micros": 36280, "output_level": 6, "num_output_files": 1, "total_output_size": 13927663, "num_input_records": 4549, "num_output_records": 4013, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795315480239, "job": 8, "event": "table_file_deletion", "file_number": 25}
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795315484326, "job": 8, "event": "table_file_deletion", "file_number": 23}
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:41:55.373445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:41:55.484516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:41:55.484522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:41:55.484523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:41:55.484525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 15 10:41:55 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:41:55.484526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 15 10:41:56 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v105: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 15 10:41:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:56 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:56 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:56 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:56 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:56.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:56 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e00043e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:56 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:56 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:56 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:56.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:57 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:41:57 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:57 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003e90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:57 compute-0 ceph-mon[74356]: pgmap v105: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [balancer INFO root] Optimize plan auto_2025-12-15_10:41:58
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [balancer INFO root] do_upmap
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [balancer INFO root] pools ['.rgw.root', 'images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', '.mgr', 'vms', '.nfs', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups']
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v106: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:41:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 15 10:41:58 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 15 10:41:58 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:41:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:58 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40096e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:41:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:41:58 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:58 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:58 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:41:58.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:58 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:58 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:41:58 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:41:58 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:41:58.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:41:59 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:41:59 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e00043e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:41:59 compute-0 ceph-mon[74356]: pgmap v106: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:00 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v107: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:00 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:00 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003eb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:00 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:00 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:00 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:00.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:00 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:00 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40096e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:00 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:00 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:00 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:00.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:01 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:01 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:01 compute-0 ceph-mon[74356]: pgmap v107: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:02 compute-0 sudo[105680]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpnsigverwvwolsewqggksadesilixzy ; /usr/bin/python3'
Dec 15 10:42:02 compute-0 sudo[105680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:42:02 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v108: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:02 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:42:02 compute-0 python3[105682]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:42:02 compute-0 podman[105683]: 2025-12-15 10:42:02.239478539 +0000 UTC m=+0.045575122 container create 4faa7b3f227cecd81acd075189740142c33e9bf7e4fad7fe23f4ac60fdf4ba90 (image=quay.io/ceph/ceph:v19, name=dazzling_franklin, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:42:02 compute-0 systemd[1]: Started libpod-conmon-4faa7b3f227cecd81acd075189740142c33e9bf7e4fad7fe23f4ac60fdf4ba90.scope.
Dec 15 10:42:02 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:42:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b34b5c636532730d004c524e0fc9f482f9bd9e9defff098ff53117a97deb6c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b34b5c636532730d004c524e0fc9f482f9bd9e9defff098ff53117a97deb6c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:02 compute-0 podman[105683]: 2025-12-15 10:42:02.2205445 +0000 UTC m=+0.026641563 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:42:02 compute-0 podman[105683]: 2025-12-15 10:42:02.319607612 +0000 UTC m=+0.125704205 container init 4faa7b3f227cecd81acd075189740142c33e9bf7e4fad7fe23f4ac60fdf4ba90 (image=quay.io/ceph/ceph:v19, name=dazzling_franklin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 15 10:42:02 compute-0 podman[105683]: 2025-12-15 10:42:02.327499511 +0000 UTC m=+0.133596094 container start 4faa7b3f227cecd81acd075189740142c33e9bf7e4fad7fe23f4ac60fdf4ba90 (image=quay.io/ceph/ceph:v19, name=dazzling_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:42:02 compute-0 podman[105683]: 2025-12-15 10:42:02.33156389 +0000 UTC m=+0.137660493 container attach 4faa7b3f227cecd81acd075189740142c33e9bf7e4fad7fe23f4ac60fdf4ba90 (image=quay.io/ceph/ceph:v19, name=dazzling_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:42:02 compute-0 dazzling_franklin[105699]: ERROR: invalid flag --daemon-type
Dec 15 10:42:02 compute-0 systemd[1]: libpod-4faa7b3f227cecd81acd075189740142c33e9bf7e4fad7fe23f4ac60fdf4ba90.scope: Deactivated successfully.
Dec 15 10:42:02 compute-0 podman[105683]: 2025-12-15 10:42:02.378735612 +0000 UTC m=+0.184832195 container died 4faa7b3f227cecd81acd075189740142c33e9bf7e4fad7fe23f4ac60fdf4ba90 (image=quay.io/ceph/ceph:v19, name=dazzling_franklin, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 15 10:42:02 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:02 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e00043e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-25b34b5c636532730d004c524e0fc9f482f9bd9e9defff098ff53117a97deb6c-merged.mount: Deactivated successfully.
Dec 15 10:42:02 compute-0 podman[105683]: 2025-12-15 10:42:02.418676785 +0000 UTC m=+0.224773378 container remove 4faa7b3f227cecd81acd075189740142c33e9bf7e4fad7fe23f4ac60fdf4ba90 (image=quay.io/ceph/ceph:v19, name=dazzling_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:42:02 compute-0 systemd[1]: libpod-conmon-4faa7b3f227cecd81acd075189740142c33e9bf7e4fad7fe23f4ac60fdf4ba90.scope: Deactivated successfully.
Dec 15 10:42:02 compute-0 sudo[105680]: pam_unix(sudo:session): session closed for user root
Dec 15 10:42:02 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:02 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:02 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:02.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:02 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:02 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003ed0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:02 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:02 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:42:02 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:02.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:42:02 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:42:02] "GET /metrics HTTP/1.1" 200 48216 "" "Prometheus/2.51.0"
Dec 15 10:42:02 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:42:02] "GET /metrics HTTP/1.1" 200 48216 "" "Prometheus/2.51.0"
Dec 15 10:42:03 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:03 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40096e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:03 compute-0 ceph-mon[74356]: pgmap v108: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:04 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v109: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:04 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:04 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:04 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:04 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:04 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:04.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:04 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:04 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e00043e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:04 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:04 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:04 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:04.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:04 compute-0 ceph-mon[74356]: pgmap v109: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:05 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:05 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003ef0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:06 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v110: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 15 10:42:06 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:06 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:06 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:06 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:06 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:06.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:06 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:06 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40096e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:06 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:06 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:06 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:06.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:07 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:42:07 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:07 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40096e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:07 compute-0 ceph-mon[74356]: pgmap v110: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 15 10:42:08 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v111: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:08 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:08 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003f10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:08 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:08 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:08 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:08.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:08 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:08 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:08 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:08 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:42:08 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:08.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:42:09 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:09 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40096e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:09 compute-0 ceph-mon[74356]: pgmap v111: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:10 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v112: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:10 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:10 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40096e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:10 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:10 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:42:10 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:10.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:42:10 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:10 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003f30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:10 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:10 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:10 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:10.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:11 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:11 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:11 compute-0 ceph-mon[74356]: pgmap v112: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:12 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v113: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:12 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:42:12 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:12 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40096e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:12 compute-0 sudo[105767]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaldjfyplwqisgdzidgxwnjgrspyheyd ; /usr/bin/python3'
Dec 15 10:42:12 compute-0 sudo[105767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:42:12 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:12 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:12 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:12.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:12 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:12 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40096e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:12 compute-0 python3[105769]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:42:12 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:12 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:42:12 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:12.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:42:12 compute-0 podman[105770]: 2025-12-15 10:42:12.732151186 +0000 UTC m=+0.062967273 container create 0912335e3090764366210cc89c59008341506ef774e61effeec7d2a8317ffb74 (image=quay.io/ceph/ceph:v19, name=festive_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 15 10:42:12 compute-0 ceph-mon[74356]: pgmap v113: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:12 compute-0 systemd[1]: Started libpod-conmon-0912335e3090764366210cc89c59008341506ef774e61effeec7d2a8317ffb74.scope.
Dec 15 10:42:12 compute-0 podman[105770]: 2025-12-15 10:42:12.696137587 +0000 UTC m=+0.026953694 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:42:12 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:42:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e831fb2a6fe552c4856e71e88c9e4ff8bf05b330321c079e300a6b7cc1c9c9d9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e831fb2a6fe552c4856e71e88c9e4ff8bf05b330321c079e300a6b7cc1c9c9d9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:12 compute-0 podman[105770]: 2025-12-15 10:42:12.82118086 +0000 UTC m=+0.151996967 container init 0912335e3090764366210cc89c59008341506ef774e61effeec7d2a8317ffb74 (image=quay.io/ceph/ceph:v19, name=festive_shamir, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 15 10:42:12 compute-0 podman[105770]: 2025-12-15 10:42:12.828723918 +0000 UTC m=+0.159540005 container start 0912335e3090764366210cc89c59008341506ef774e61effeec7d2a8317ffb74 (image=quay.io/ceph/ceph:v19, name=festive_shamir, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:42:12 compute-0 podman[105770]: 2025-12-15 10:42:12.833229851 +0000 UTC m=+0.164045938 container attach 0912335e3090764366210cc89c59008341506ef774e61effeec7d2a8317ffb74 (image=quay.io/ceph/ceph:v19, name=festive_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:42:12 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:42:12] "GET /metrics HTTP/1.1" 200 48216 "" "Prometheus/2.51.0"
Dec 15 10:42:12 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:42:12] "GET /metrics HTTP/1.1" 200 48216 "" "Prometheus/2.51.0"
Dec 15 10:42:12 compute-0 festive_shamir[105785]: ERROR: invalid flag --daemon-type
Dec 15 10:42:12 compute-0 systemd[1]: libpod-0912335e3090764366210cc89c59008341506ef774e61effeec7d2a8317ffb74.scope: Deactivated successfully.
Dec 15 10:42:12 compute-0 podman[105770]: 2025-12-15 10:42:12.881106374 +0000 UTC m=+0.211922461 container died 0912335e3090764366210cc89c59008341506ef774e61effeec7d2a8317ffb74 (image=quay.io/ceph/ceph:v19, name=festive_shamir, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:42:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-e831fb2a6fe552c4856e71e88c9e4ff8bf05b330321c079e300a6b7cc1c9c9d9-merged.mount: Deactivated successfully.
Dec 15 10:42:12 compute-0 podman[105770]: 2025-12-15 10:42:12.933162731 +0000 UTC m=+0.263978818 container remove 0912335e3090764366210cc89c59008341506ef774e61effeec7d2a8317ffb74 (image=quay.io/ceph/ceph:v19, name=festive_shamir, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 15 10:42:12 compute-0 sudo[105767]: pam_unix(sudo:session): session closed for user root
Dec 15 10:42:12 compute-0 systemd[1]: libpod-conmon-0912335e3090764366210cc89c59008341506ef774e61effeec7d2a8317ffb74.scope: Deactivated successfully.
Dec 15 10:42:13 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 15 10:42:13 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:42:13 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:13 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003f50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:13 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:42:14 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v114: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:14 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:14 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:14 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:14 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:14 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:14.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:14 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:14 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40096e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:14 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:14 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:42:14 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:14.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:42:14 compute-0 ceph-mon[74356]: pgmap v114: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:14 compute-0 sudo[105820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 15 10:42:14 compute-0 sudo[105820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:42:14 compute-0 sudo[105820]: pam_unix(sudo:session): session closed for user root
Dec 15 10:42:15 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:15 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:15 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-nfs-cephfs-compute-0-ykblqa[95047]: [WARNING] 348/104215 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 15 10:42:16 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v115: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 15 10:42:16 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:16 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003f50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:16 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:16 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:16 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:16.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:16 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:16 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:16 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:16 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:16 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:16.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:17 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:42:17 compute-0 ceph-mon[74356]: pgmap v115: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 15 10:42:17 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:17 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40096e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:18 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v116: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 15 10:42:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:18 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:18 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:18 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:18 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:18.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:18 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:18 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:18 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:18 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:18.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:19 compute-0 ceph-mon[74356]: pgmap v116: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 15 10:42:19 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:19 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:20 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v117: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 15 10:42:20 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:20 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40096e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:20 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:20 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:20 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:20.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:20 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:20 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8001a90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:20 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:20 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:20 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:20.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:21 compute-0 ceph-mon[74356]: pgmap v117: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 15 10:42:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:21 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003f90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:22 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:42:22 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v118: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 15 10:42:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:22 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:22 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:22 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:22 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:22.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:22 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f40096e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:22 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:22 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:22 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:22.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:42:22] "GET /metrics HTTP/1.1" 200 48226 "" "Prometheus/2.51.0"
Dec 15 10:42:22 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:42:22] "GET /metrics HTTP/1.1" 200 48226 "" "Prometheus/2.51.0"
Dec 15 10:42:23 compute-0 sudo[105876]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgwtoooznrjhyfclymyzfzhypnrurfve ; /usr/bin/python3'
Dec 15 10:42:23 compute-0 sudo[105876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:42:23 compute-0 python3[105878]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:42:23 compute-0 ceph-mon[74356]: pgmap v118: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 15 10:42:23 compute-0 podman[105879]: 2025-12-15 10:42:23.248181278 +0000 UTC m=+0.049809415 container create 1b5e6e3e991cc7f7034d98b07bfa54ae59a5f5ffa2ac89c2b03d5a1f7bcb008e (image=quay.io/ceph/ceph:v19, name=zen_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:42:23 compute-0 systemd[1]: Started libpod-conmon-1b5e6e3e991cc7f7034d98b07bfa54ae59a5f5ffa2ac89c2b03d5a1f7bcb008e.scope.
Dec 15 10:42:23 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f04f73c53c9f6081aa5e93d1badf090f4dca8c0b9298150dd357447408f7e638/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f04f73c53c9f6081aa5e93d1badf090f4dca8c0b9298150dd357447408f7e638/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:23 compute-0 podman[105879]: 2025-12-15 10:42:23.311077377 +0000 UTC m=+0.112705514 container init 1b5e6e3e991cc7f7034d98b07bfa54ae59a5f5ffa2ac89c2b03d5a1f7bcb008e (image=quay.io/ceph/ceph:v19, name=zen_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:42:23 compute-0 podman[105879]: 2025-12-15 10:42:23.317424988 +0000 UTC m=+0.119053125 container start 1b5e6e3e991cc7f7034d98b07bfa54ae59a5f5ffa2ac89c2b03d5a1f7bcb008e (image=quay.io/ceph/ceph:v19, name=zen_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 15 10:42:23 compute-0 podman[105879]: 2025-12-15 10:42:23.320615318 +0000 UTC m=+0.122243455 container attach 1b5e6e3e991cc7f7034d98b07bfa54ae59a5f5ffa2ac89c2b03d5a1f7bcb008e (image=quay.io/ceph/ceph:v19, name=zen_mestorf, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:42:23 compute-0 podman[105879]: 2025-12-15 10:42:23.230674466 +0000 UTC m=+0.032302603 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:42:23 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:23 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8002690 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:23 compute-0 zen_mestorf[105895]: ERROR: invalid flag --daemon-type
Dec 15 10:42:23 compute-0 systemd[1]: libpod-1b5e6e3e991cc7f7034d98b07bfa54ae59a5f5ffa2ac89c2b03d5a1f7bcb008e.scope: Deactivated successfully.
Dec 15 10:42:23 compute-0 podman[105879]: 2025-12-15 10:42:23.384441297 +0000 UTC m=+0.186069444 container died 1b5e6e3e991cc7f7034d98b07bfa54ae59a5f5ffa2ac89c2b03d5a1f7bcb008e (image=quay.io/ceph/ceph:v19, name=zen_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 15 10:42:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-f04f73c53c9f6081aa5e93d1badf090f4dca8c0b9298150dd357447408f7e638-merged.mount: Deactivated successfully.
Dec 15 10:42:23 compute-0 podman[105879]: 2025-12-15 10:42:23.417494792 +0000 UTC m=+0.219122929 container remove 1b5e6e3e991cc7f7034d98b07bfa54ae59a5f5ffa2ac89c2b03d5a1f7bcb008e (image=quay.io/ceph/ceph:v19, name=zen_mestorf, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 15 10:42:23 compute-0 systemd[1]: libpod-conmon-1b5e6e3e991cc7f7034d98b07bfa54ae59a5f5ffa2ac89c2b03d5a1f7bcb008e.scope: Deactivated successfully.
Dec 15 10:42:23 compute-0 sudo[105876]: pam_unix(sudo:session): session closed for user root
Dec 15 10:42:24 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v119: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 15 10:42:24 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:24 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003fb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:24 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:24 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:24 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:24.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:24 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:24 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 15 10:42:24 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:24 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:24 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:24 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:24 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:24.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:25 compute-0 ceph-mon[74356]: pgmap v119: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 15 10:42:25 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:25 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:26 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v120: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:42:26 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:26 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c80028c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:26 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:26 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:42:26 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:26.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:42:26 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:26 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:26 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:26 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:42:26 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:26.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:42:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:42:27 compute-0 ceph-mon[74356]: pgmap v120: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:42:27 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:27 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:27 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:27 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 15 10:42:27 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:27 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 15 10:42:28 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v121: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:42:28 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 15 10:42:28 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:42:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:42:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:42:28 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:42:28 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:28 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:42:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:42:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:42:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:42:28 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:28 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:42:28 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:28.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:42:28 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:28 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c80028c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:28 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:28 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:28 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:28.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:29 compute-0 ceph-mon[74356]: pgmap v121: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:42:29 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:29 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c80028c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:30 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v122: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 15 10:42:30 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:30 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:30 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:30 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:30 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:30.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:30 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:30 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:30 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:30 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 15 10:42:30 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:30 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:30 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:30.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:31 compute-0 ceph-mon[74356]: pgmap v122: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 15 10:42:31 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:31 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c80028c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:42:32 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v123: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 15 10:42:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:32 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c80028c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:32 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:32 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:32 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:32.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:32 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:32 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:32 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:42:32 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:32.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:42:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:42:32] "GET /metrics HTTP/1.1" 200 48233 "" "Prometheus/2.51.0"
Dec 15 10:42:32 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:42:32] "GET /metrics HTTP/1.1" 200 48233 "" "Prometheus/2.51.0"
Dec 15 10:42:33 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:33 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:33 compute-0 ceph-mon[74356]: pgmap v123: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 15 10:42:33 compute-0 sudo[105961]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwgtsvkgumcvpptwjjrletgkuhuuullt ; /usr/bin/python3'
Dec 15 10:42:33 compute-0 sudo[105961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:42:33 compute-0 python3[105963]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:42:33 compute-0 podman[105964]: 2025-12-15 10:42:33.783609936 +0000 UTC m=+0.059672498 container create 1677c1e6afbbab018924eaa37ea1563fff1eee7f38c2c12c8114042be3977382 (image=quay.io/ceph/ceph:v19, name=dreamy_zhukovsky, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:42:33 compute-0 systemd[1]: Started libpod-conmon-1677c1e6afbbab018924eaa37ea1563fff1eee7f38c2c12c8114042be3977382.scope.
Dec 15 10:42:33 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:42:33 compute-0 podman[105964]: 2025-12-15 10:42:33.755669792 +0000 UTC m=+0.031732414 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a4a50ff67cca475b949712dc820320992be27927ecaad022da482ac56c2594/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a4a50ff67cca475b949712dc820320992be27927ecaad022da482ac56c2594/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:33 compute-0 podman[105964]: 2025-12-15 10:42:33.862561742 +0000 UTC m=+0.138624324 container init 1677c1e6afbbab018924eaa37ea1563fff1eee7f38c2c12c8114042be3977382 (image=quay.io/ceph/ceph:v19, name=dreamy_zhukovsky, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:42:33 compute-0 podman[105964]: 2025-12-15 10:42:33.871857756 +0000 UTC m=+0.147920318 container start 1677c1e6afbbab018924eaa37ea1563fff1eee7f38c2c12c8114042be3977382 (image=quay.io/ceph/ceph:v19, name=dreamy_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:42:33 compute-0 podman[105964]: 2025-12-15 10:42:33.875587663 +0000 UTC m=+0.151650195 container attach 1677c1e6afbbab018924eaa37ea1563fff1eee7f38c2c12c8114042be3977382 (image=quay.io/ceph/ceph:v19, name=dreamy_zhukovsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:42:33 compute-0 dreamy_zhukovsky[105979]: ERROR: invalid flag --daemon-type
Dec 15 10:42:33 compute-0 systemd[1]: libpod-1677c1e6afbbab018924eaa37ea1563fff1eee7f38c2c12c8114042be3977382.scope: Deactivated successfully.
Dec 15 10:42:33 compute-0 conmon[105979]: conmon 1677c1e6afbbab018924 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1677c1e6afbbab018924eaa37ea1563fff1eee7f38c2c12c8114042be3977382.scope/container/memory.events
Dec 15 10:42:33 compute-0 podman[105964]: 2025-12-15 10:42:33.920510054 +0000 UTC m=+0.196572596 container died 1677c1e6afbbab018924eaa37ea1563fff1eee7f38c2c12c8114042be3977382 (image=quay.io/ceph/ceph:v19, name=dreamy_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:42:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-57a4a50ff67cca475b949712dc820320992be27927ecaad022da482ac56c2594-merged.mount: Deactivated successfully.
Dec 15 10:42:33 compute-0 podman[105964]: 2025-12-15 10:42:33.963140262 +0000 UTC m=+0.239202794 container remove 1677c1e6afbbab018924eaa37ea1563fff1eee7f38c2c12c8114042be3977382 (image=quay.io/ceph/ceph:v19, name=dreamy_zhukovsky, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 15 10:42:33 compute-0 systemd[1]: libpod-conmon-1677c1e6afbbab018924eaa37ea1563fff1eee7f38c2c12c8114042be3977382.scope: Deactivated successfully.
Dec 15 10:42:33 compute-0 sudo[105961]: pam_unix(sudo:session): session closed for user root
Dec 15 10:42:34 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v124: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 15 10:42:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:34 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c80028c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:34 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:34 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:34 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:34.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:34 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:34 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:34 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:34 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:34.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:34 compute-0 sudo[106013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 15 10:42:34 compute-0 sudo[106013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:42:34 compute-0 sudo[106013]: pam_unix(sudo:session): session closed for user root
Dec 15 10:42:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:35 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:35 compute-0 ceph-mon[74356]: pgmap v124: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 15 10:42:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-nfs-cephfs-compute-0-ykblqa[95047]: [WARNING] 348/104235 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 15 10:42:36 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v125: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 15 10:42:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:36 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:36 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:36 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:42:36 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:36.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:42:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:36 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c80028c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:36 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:36 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:36 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:36.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:37 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:42:37 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:37 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:37 compute-0 ceph-mon[74356]: pgmap v125: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 15 10:42:38 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v126: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Dec 15 10:42:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:38 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:38 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:38 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:38 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:38.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:38 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:38 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:38 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:38 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:38.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:39 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c80048e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:39 compute-0 ceph-mon[74356]: pgmap v126: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Dec 15 10:42:40 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v127: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Dec 15 10:42:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:40 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4004090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:40 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:40 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:40 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:40.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:40 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:40 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:40 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:40 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:40.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:41 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:41 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:41 compute-0 ceph-mon[74356]: pgmap v127: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Dec 15 10:42:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:42:42 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v128: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:42:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:42 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c80048e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:42 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:42 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:42:42 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:42.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:42:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:42 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c40040b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:42 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:42 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:42:42 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:42.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:42:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:42:42] "GET /metrics HTTP/1.1" 200 48233 "" "Prometheus/2.51.0"
Dec 15 10:42:42 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:42:42] "GET /metrics HTTP/1.1" 200 48233 "" "Prometheus/2.51.0"
Dec 15 10:42:43 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 15 10:42:43 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:42:43 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:43 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:43 compute-0 ceph-mon[74356]: pgmap v128: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:42:43 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:42:44 compute-0 sudo[106070]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emxwyznsrizamyqomcnjwfnypahccsaz ; /usr/bin/python3'
Dec 15 10:42:44 compute-0 sudo[106070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:42:44 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v129: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:42:44 compute-0 python3[106072]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:42:44 compute-0 podman[106073]: 2025-12-15 10:42:44.273107818 +0000 UTC m=+0.045018124 container create fe08114a9364b0aca5a73a7053012aa8ca3a9bd2baf07db6a52eb0c0e731f2e3 (image=quay.io/ceph/ceph:v19, name=nice_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec 15 10:42:44 compute-0 systemd[1]: Started libpod-conmon-fe08114a9364b0aca5a73a7053012aa8ca3a9bd2baf07db6a52eb0c0e731f2e3.scope.
Dec 15 10:42:44 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:42:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a18f35eb2815a72765a9dbf9ab95a4ac282bb3e9776a88f122fd072ccbc04cc6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a18f35eb2815a72765a9dbf9ab95a4ac282bb3e9776a88f122fd072ccbc04cc6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:44 compute-0 podman[106073]: 2025-12-15 10:42:44.250378399 +0000 UTC m=+0.022288725 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:42:44 compute-0 podman[106073]: 2025-12-15 10:42:44.346386527 +0000 UTC m=+0.118296863 container init fe08114a9364b0aca5a73a7053012aa8ca3a9bd2baf07db6a52eb0c0e731f2e3 (image=quay.io/ceph/ceph:v19, name=nice_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:42:44 compute-0 podman[106073]: 2025-12-15 10:42:44.353959669 +0000 UTC m=+0.125870015 container start fe08114a9364b0aca5a73a7053012aa8ca3a9bd2baf07db6a52eb0c0e731f2e3 (image=quay.io/ceph/ceph:v19, name=nice_lichterman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 15 10:42:44 compute-0 podman[106073]: 2025-12-15 10:42:44.357475192 +0000 UTC m=+0.129385568 container attach fe08114a9364b0aca5a73a7053012aa8ca3a9bd2baf07db6a52eb0c0e731f2e3 (image=quay.io/ceph/ceph:v19, name=nice_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 15 10:42:44 compute-0 nice_lichterman[106088]: ERROR: invalid flag --daemon-type
Dec 15 10:42:44 compute-0 systemd[1]: libpod-fe08114a9364b0aca5a73a7053012aa8ca3a9bd2baf07db6a52eb0c0e731f2e3.scope: Deactivated successfully.
Dec 15 10:42:44 compute-0 podman[106073]: 2025-12-15 10:42:44.41638438 +0000 UTC m=+0.188294726 container died fe08114a9364b0aca5a73a7053012aa8ca3a9bd2baf07db6a52eb0c0e731f2e3 (image=quay.io/ceph/ceph:v19, name=nice_lichterman, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:42:44 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:44 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a18f35eb2815a72765a9dbf9ab95a4ac282bb3e9776a88f122fd072ccbc04cc6-merged.mount: Deactivated successfully.
Dec 15 10:42:44 compute-0 podman[106073]: 2025-12-15 10:42:44.465256147 +0000 UTC m=+0.237166453 container remove fe08114a9364b0aca5a73a7053012aa8ca3a9bd2baf07db6a52eb0c0e731f2e3 (image=quay.io/ceph/ceph:v19, name=nice_lichterman, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:42:44 compute-0 systemd[1]: libpod-conmon-fe08114a9364b0aca5a73a7053012aa8ca3a9bd2baf07db6a52eb0c0e731f2e3.scope: Deactivated successfully.
Dec 15 10:42:44 compute-0 sudo[106070]: pam_unix(sudo:session): session closed for user root
Dec 15 10:42:44 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:44 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:44 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:44.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:44 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:44 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c80048e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:44 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:44 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:42:44 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:44.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:42:45 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:45 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c40040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:45 compute-0 ceph-mon[74356]: pgmap v129: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:42:46 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v130: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:42:46 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:46 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:46 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:46 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:46 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:46.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:46 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:46 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc00be60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:46 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:46 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:42:46 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:46.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:42:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:42:47 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:47 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0002920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:47 compute-0 ceph-mon[74356]: pgmap v130: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:42:48 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v131: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:48 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:48 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:48 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:48 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:42:48 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:48.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:42:48 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:48 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:48 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:48 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:48 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:48.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:48 compute-0 ceph-mon[74356]: pgmap v131: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc00be60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:49 compute-0 sudo[106129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:42:49 compute-0 sudo[106129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:42:49 compute-0 sudo[106129]: pam_unix(sudo:session): session closed for user root
Dec 15 10:42:49 compute-0 sudo[106154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 15 10:42:49 compute-0 sudo[106154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:42:50 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v132: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:50 compute-0 podman[106254]: 2025-12-15 10:42:50.213573777 +0000 UTC m=+0.066858694 container exec 79ed8dc51b1852fd831764c2817cfa3745ae4937bc9015bdb965e376ab3cc58d (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 15 10:42:50 compute-0 podman[106254]: 2025-12-15 10:42:50.3035342 +0000 UTC m=+0.156819117 container exec_died 79ed8dc51b1852fd831764c2817cfa3745ae4937bc9015bdb965e376ab3cc58d (image=quay.io/ceph/ceph:v19, name=ceph-77365f67-614e-5a8d-b658-640395550c79-mon-compute-0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:42:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:50 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0002920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:50 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:50 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:42:50 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:50.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:42:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:50 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:50 compute-0 podman[106370]: 2025-12-15 10:42:50.735423993 +0000 UTC m=+0.063464815 container exec 4840dadc82f34f3188660955046170e50347603e400e5beeba87756514ef7863 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:42:50 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:50 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:50 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:50.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:50 compute-0 podman[106370]: 2025-12-15 10:42:50.773533554 +0000 UTC m=+0.101574376 container exec_died 4840dadc82f34f3188660955046170e50347603e400e5beeba87756514ef7863 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:42:51 compute-0 podman[106461]: 2025-12-15 10:42:51.201750449 +0000 UTC m=+0.102578488 container exec c0f38bc539f687c54b7eb01f058f3691b3a24f961ff23e54889334c42825f1f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:42:51 compute-0 ceph-mon[74356]: pgmap v132: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:51 compute-0 podman[106461]: 2025-12-15 10:42:51.214744876 +0000 UTC m=+0.115572885 container exec_died c0f38bc539f687c54b7eb01f058f3691b3a24f961ff23e54889334c42825f1f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 15 10:42:51 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:51 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:51 compute-0 podman[106524]: 2025-12-15 10:42:51.449789619 +0000 UTC m=+0.060004024 container exec 55276bcc9605a59cf148bdf11fc4eb753ca401193f2349bda31c6714ea73d19e (image=quay.io/ceph/haproxy:2.3, name=ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-nfs-cephfs-compute-0-ykblqa)
Dec 15 10:42:51 compute-0 podman[106524]: 2025-12-15 10:42:51.486769065 +0000 UTC m=+0.096983490 container exec_died 55276bcc9605a59cf148bdf11fc4eb753ca401193f2349bda31c6714ea73d19e (image=quay.io/ceph/haproxy:2.3, name=ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-nfs-cephfs-compute-0-ykblqa)
Dec 15 10:42:51 compute-0 podman[106589]: 2025-12-15 10:42:51.716503558 +0000 UTC m=+0.059570791 container exec eb383ee2660aa4da80291a00f1592a5a221d1461ab8a14db8d6bf5db83163774 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, description=keepalived for Ceph, release=1793)
Dec 15 10:42:51 compute-0 podman[106589]: 2025-12-15 10:42:51.730781485 +0000 UTC m=+0.073848658 container exec_died eb383ee2660aa4da80291a00f1592a5a221d1461ab8a14db8d6bf5db83163774 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-77365f67-614e-5a8d-b658-640395550c79-keepalived-nfs-cephfs-compute-0-gdchmd, vcs-type=git, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, vendor=Red Hat, Inc., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, io.openshift.expose-services=, release=1793, com.redhat.component=keepalived-container, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived)
Dec 15 10:42:52 compute-0 podman[106652]: 2025-12-15 10:42:52.006161781 +0000 UTC m=+0.077813005 container exec 89a20608330788e4c3363bfd799e4feb78e7548f505117a7fc750d8eecfef10f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:42:52 compute-0 podman[106652]: 2025-12-15 10:42:52.049701937 +0000 UTC m=+0.121353101 container exec_died 89a20608330788e4c3363bfd799e4feb78e7548f505117a7fc750d8eecfef10f (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:42:52 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:42:52 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v133: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:52 compute-0 podman[106726]: 2025-12-15 10:42:52.285275336 +0000 UTC m=+0.052215244 container exec 904de0ff1c362d9a2eedc2ea5dd43957c068393257172a115a9781fe3109347c (image=quay.io/ceph/grafana:10.4.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:42:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:52 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:52 compute-0 podman[106726]: 2025-12-15 10:42:52.442295859 +0000 UTC m=+0.209235727 container exec_died 904de0ff1c362d9a2eedc2ea5dd43957c068393257172a115a9781fe3109347c (image=quay.io/ceph/grafana:10.4.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 15 10:42:52 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:52 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:42:52 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:52.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:42:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:52 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0002250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:52 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:52 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:52 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:52.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:52 compute-0 podman[106838]: 2025-12-15 10:42:52.798152164 +0000 UTC m=+0.056732939 container exec 811bf452ef3ce2cd1829f815f500a947c1c9dba459a203c1e31a5cfea42f0cfb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:42:52 compute-0 podman[106838]: 2025-12-15 10:42:52.839000144 +0000 UTC m=+0.097580899 container exec_died 811bf452ef3ce2cd1829f815f500a947c1c9dba459a203c1e31a5cfea42f0cfb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-77365f67-614e-5a8d-b658-640395550c79-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 15 10:42:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:42:52] "GET /metrics HTTP/1.1" 200 48229 "" "Prometheus/2.51.0"
Dec 15 10:42:52 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:42:52] "GET /metrics HTTP/1.1" 200 48229 "" "Prometheus/2.51.0"
Dec 15 10:42:52 compute-0 sudo[106154]: pam_unix(sudo:session): session closed for user root
Dec 15 10:42:52 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:42:52 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:42:52 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:42:52 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:42:53 compute-0 sudo[106879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:42:53 compute-0 sudo[106879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:42:53 compute-0 sudo[106879]: pam_unix(sudo:session): session closed for user root
Dec 15 10:42:53 compute-0 sudo[106904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 15 10:42:53 compute-0 sudo[106904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:42:53 compute-0 ceph-mon[74356]: pgmap v133: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:53 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:42:53 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:42:53 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:53 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c40041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:53 compute-0 sudo[106904]: pam_unix(sudo:session): session closed for user root
Dec 15 10:42:53 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:42:53 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:42:53 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 15 10:42:53 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:42:53 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 15 10:42:53 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:42:53 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 15 10:42:53 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:42:53 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 15 10:42:53 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 15 10:42:53 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 15 10:42:53 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 15 10:42:53 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:42:53 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:42:53 compute-0 sudo[106962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:42:53 compute-0 sudo[106962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:42:53 compute-0 sudo[106962]: pam_unix(sudo:session): session closed for user root
Dec 15 10:42:53 compute-0 sudo[106987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 15 10:42:53 compute-0 sudo[106987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:42:54 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v134: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:54 compute-0 podman[107053]: 2025-12-15 10:42:54.151595584 +0000 UTC m=+0.046533633 container create d5cb8a8e058c670788ef7ac41134e34ab16c3ea3390cc0ea303b193d622b1a52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 15 10:42:54 compute-0 systemd[1]: Started libpod-conmon-d5cb8a8e058c670788ef7ac41134e34ab16c3ea3390cc0ea303b193d622b1a52.scope.
Dec 15 10:42:54 compute-0 podman[107053]: 2025-12-15 10:42:54.12900004 +0000 UTC m=+0.023938139 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:42:54 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:42:54 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:42:54 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:42:54 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:42:54 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:42:54 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 15 10:42:54 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 15 10:42:54 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:42:54 compute-0 podman[107053]: 2025-12-15 10:42:54.246694382 +0000 UTC m=+0.141632451 container init d5cb8a8e058c670788ef7ac41134e34ab16c3ea3390cc0ea303b193d622b1a52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:42:54 compute-0 podman[107053]: 2025-12-15 10:42:54.258390487 +0000 UTC m=+0.153328546 container start d5cb8a8e058c670788ef7ac41134e34ab16c3ea3390cc0ea303b193d622b1a52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_feistel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 15 10:42:54 compute-0 podman[107053]: 2025-12-15 10:42:54.262108346 +0000 UTC m=+0.157046415 container attach d5cb8a8e058c670788ef7ac41134e34ab16c3ea3390cc0ea303b193d622b1a52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_feistel, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 15 10:42:54 compute-0 cranky_feistel[107069]: 167 167
Dec 15 10:42:54 compute-0 systemd[1]: libpod-d5cb8a8e058c670788ef7ac41134e34ab16c3ea3390cc0ea303b193d622b1a52.scope: Deactivated successfully.
Dec 15 10:42:54 compute-0 podman[107053]: 2025-12-15 10:42:54.264155532 +0000 UTC m=+0.159093581 container died d5cb8a8e058c670788ef7ac41134e34ab16c3ea3390cc0ea303b193d622b1a52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 15 10:42:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e8d2ed0727003d25a73e891ce4df9752ee90ba0987fd3100312bc3b8afb9d7f-merged.mount: Deactivated successfully.
Dec 15 10:42:54 compute-0 podman[107053]: 2025-12-15 10:42:54.301653763 +0000 UTC m=+0.196591802 container remove d5cb8a8e058c670788ef7ac41134e34ab16c3ea3390cc0ea303b193d622b1a52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:42:54 compute-0 systemd[1]: libpod-conmon-d5cb8a8e058c670788ef7ac41134e34ab16c3ea3390cc0ea303b193d622b1a52.scope: Deactivated successfully.
Dec 15 10:42:54 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:54 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:54 compute-0 podman[107092]: 2025-12-15 10:42:54.438657694 +0000 UTC m=+0.040945903 container create 9e568e9f6d932448422b7846eda0abf0855187e3bf24e1126b6f67acf66a6d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 15 10:42:54 compute-0 systemd[1]: Started libpod-conmon-9e568e9f6d932448422b7846eda0abf0855187e3bf24e1126b6f67acf66a6d7d.scope.
Dec 15 10:42:54 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:42:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/587b57e5b1183458157cad189bf4a1e15a74c0d8acd0e1d8c89d87935281bb79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/587b57e5b1183458157cad189bf4a1e15a74c0d8acd0e1d8c89d87935281bb79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/587b57e5b1183458157cad189bf4a1e15a74c0d8acd0e1d8c89d87935281bb79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/587b57e5b1183458157cad189bf4a1e15a74c0d8acd0e1d8c89d87935281bb79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/587b57e5b1183458157cad189bf4a1e15a74c0d8acd0e1d8c89d87935281bb79/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:54 compute-0 podman[107092]: 2025-12-15 10:42:54.420187443 +0000 UTC m=+0.022475672 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:42:54 compute-0 podman[107092]: 2025-12-15 10:42:54.51903041 +0000 UTC m=+0.121318639 container init 9e568e9f6d932448422b7846eda0abf0855187e3bf24e1126b6f67acf66a6d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:42:54 compute-0 podman[107092]: 2025-12-15 10:42:54.526450408 +0000 UTC m=+0.128738617 container start 9e568e9f6d932448422b7846eda0abf0855187e3bf24e1126b6f67acf66a6d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_einstein, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Dec 15 10:42:54 compute-0 podman[107092]: 2025-12-15 10:42:54.529753224 +0000 UTC m=+0.132041453 container attach 9e568e9f6d932448422b7846eda0abf0855187e3bf24e1126b6f67acf66a6d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_einstein, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:42:54 compute-0 sudo[107137]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klwpxbhofupendouwnibzcbldbfduhli ; /usr/bin/python3'
Dec 15 10:42:54 compute-0 sudo[107137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:42:54 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:54 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:42:54 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:54.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:42:54 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:54 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc00be60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:54 compute-0 python3[107139]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:42:54 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:54 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:42:54 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:54.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:42:54 compute-0 podman[107143]: 2025-12-15 10:42:54.790961396 +0000 UTC m=+0.042399860 container create c2cd1e75c46490e9ef5bfdb443f52b545fd78cfc9b663af4f57c1d50d61bdb41 (image=quay.io/ceph/ceph:v19, name=interesting_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec 15 10:42:54 compute-0 systemd[1]: Started libpod-conmon-c2cd1e75c46490e9ef5bfdb443f52b545fd78cfc9b663af4f57c1d50d61bdb41.scope.
Dec 15 10:42:54 compute-0 inspiring_einstein[107108]: --> passed data devices: 0 physical, 1 LVM
Dec 15 10:42:54 compute-0 inspiring_einstein[107108]: --> All data devices are unavailable
Dec 15 10:42:54 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:42:54 compute-0 podman[107143]: 2025-12-15 10:42:54.772444493 +0000 UTC m=+0.023882987 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:42:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60c0b852bacce0601735ed23e3b13174818ee2fd17d241a29415a9d87aae635e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60c0b852bacce0601735ed23e3b13174818ee2fd17d241a29415a9d87aae635e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:54 compute-0 podman[107143]: 2025-12-15 10:42:54.884906537 +0000 UTC m=+0.136345041 container init c2cd1e75c46490e9ef5bfdb443f52b545fd78cfc9b663af4f57c1d50d61bdb41 (image=quay.io/ceph/ceph:v19, name=interesting_lederberg, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 15 10:42:54 compute-0 podman[107143]: 2025-12-15 10:42:54.89747773 +0000 UTC m=+0.148916204 container start c2cd1e75c46490e9ef5bfdb443f52b545fd78cfc9b663af4f57c1d50d61bdb41 (image=quay.io/ceph/ceph:v19, name=interesting_lederberg, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 15 10:42:54 compute-0 podman[107143]: 2025-12-15 10:42:54.901178139 +0000 UTC m=+0.152616623 container attach c2cd1e75c46490e9ef5bfdb443f52b545fd78cfc9b663af4f57c1d50d61bdb41 (image=quay.io/ceph/ceph:v19, name=interesting_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:42:54 compute-0 systemd[1]: libpod-9e568e9f6d932448422b7846eda0abf0855187e3bf24e1126b6f67acf66a6d7d.scope: Deactivated successfully.
Dec 15 10:42:54 compute-0 podman[107169]: 2025-12-15 10:42:54.957646459 +0000 UTC m=+0.025692745 container died 9e568e9f6d932448422b7846eda0abf0855187e3bf24e1126b6f67acf66a6d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:42:54 compute-0 interesting_lederberg[107163]: ERROR: invalid flag --daemon-type
Dec 15 10:42:54 compute-0 systemd[1]: libpod-c2cd1e75c46490e9ef5bfdb443f52b545fd78cfc9b663af4f57c1d50d61bdb41.scope: Deactivated successfully.
Dec 15 10:42:54 compute-0 conmon[107163]: conmon c2cd1e75c46490e9ef5b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c2cd1e75c46490e9ef5bfdb443f52b545fd78cfc9b663af4f57c1d50d61bdb41.scope/container/memory.events
Dec 15 10:42:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-587b57e5b1183458157cad189bf4a1e15a74c0d8acd0e1d8c89d87935281bb79-merged.mount: Deactivated successfully.
Dec 15 10:42:54 compute-0 podman[107169]: 2025-12-15 10:42:54.99950806 +0000 UTC m=+0.067554326 container remove 9e568e9f6d932448422b7846eda0abf0855187e3bf24e1126b6f67acf66a6d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 15 10:42:55 compute-0 systemd[1]: libpod-conmon-9e568e9f6d932448422b7846eda0abf0855187e3bf24e1126b6f67acf66a6d7d.scope: Deactivated successfully.
Dec 15 10:42:55 compute-0 podman[107199]: 2025-12-15 10:42:55.01914767 +0000 UTC m=+0.023772564 container died c2cd1e75c46490e9ef5bfdb443f52b545fd78cfc9b663af4f57c1d50d61bdb41 (image=quay.io/ceph/ceph:v19, name=interesting_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:42:55 compute-0 sudo[106987]: pam_unix(sudo:session): session closed for user root
Dec 15 10:42:55 compute-0 podman[107199]: 2025-12-15 10:42:55.062633123 +0000 UTC m=+0.067257967 container remove c2cd1e75c46490e9ef5bfdb443f52b545fd78cfc9b663af4f57c1d50d61bdb41 (image=quay.io/ceph/ceph:v19, name=interesting_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 15 10:42:55 compute-0 systemd[1]: libpod-conmon-c2cd1e75c46490e9ef5bfdb443f52b545fd78cfc9b663af4f57c1d50d61bdb41.scope: Deactivated successfully.
Dec 15 10:42:55 compute-0 sudo[107209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 15 10:42:55 compute-0 sudo[107209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:42:55 compute-0 sudo[107209]: pam_unix(sudo:session): session closed for user root
Dec 15 10:42:55 compute-0 sudo[107137]: pam_unix(sudo:session): session closed for user root
Dec 15 10:42:55 compute-0 sudo[107237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:42:55 compute-0 sudo[107237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:42:55 compute-0 sudo[107237]: pam_unix(sudo:session): session closed for user root
Dec 15 10:42:55 compute-0 sudo[107263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 -- lvm list --format json
Dec 15 10:42:55 compute-0 sudo[107263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:42:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-60c0b852bacce0601735ed23e3b13174818ee2fd17d241a29415a9d87aae635e-merged.mount: Deactivated successfully.
Dec 15 10:42:55 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:55 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0002250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:55 compute-0 podman[107329]: 2025-12-15 10:42:55.566335027 +0000 UTC m=+0.042873934 container create 3865614f9ecaa1a2f1f3e778d98d362a744f4b4c39cda4da10dd3b8ae5153e8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_golick, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:42:55 compute-0 systemd[1]: Started libpod-conmon-3865614f9ecaa1a2f1f3e778d98d362a744f4b4c39cda4da10dd3b8ae5153e8c.scope.
Dec 15 10:42:55 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:42:55 compute-0 podman[107329]: 2025-12-15 10:42:55.54708584 +0000 UTC m=+0.023624807 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:42:55 compute-0 podman[107329]: 2025-12-15 10:42:55.642948233 +0000 UTC m=+0.119487160 container init 3865614f9ecaa1a2f1f3e778d98d362a744f4b4c39cda4da10dd3b8ae5153e8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:42:55 compute-0 podman[107329]: 2025-12-15 10:42:55.648273274 +0000 UTC m=+0.124812191 container start 3865614f9ecaa1a2f1f3e778d98d362a744f4b4c39cda4da10dd3b8ae5153e8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:42:55 compute-0 podman[107329]: 2025-12-15 10:42:55.652000243 +0000 UTC m=+0.128539160 container attach 3865614f9ecaa1a2f1f3e778d98d362a744f4b4c39cda4da10dd3b8ae5153e8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_golick, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 15 10:42:55 compute-0 determined_golick[107346]: 167 167
Dec 15 10:42:55 compute-0 systemd[1]: libpod-3865614f9ecaa1a2f1f3e778d98d362a744f4b4c39cda4da10dd3b8ae5153e8c.scope: Deactivated successfully.
Dec 15 10:42:55 compute-0 conmon[107346]: conmon 3865614f9ecaa1a2f1f3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3865614f9ecaa1a2f1f3e778d98d362a744f4b4c39cda4da10dd3b8ae5153e8c.scope/container/memory.events
Dec 15 10:42:55 compute-0 podman[107329]: 2025-12-15 10:42:55.654007997 +0000 UTC m=+0.130546914 container died 3865614f9ecaa1a2f1f3e778d98d362a744f4b4c39cda4da10dd3b8ae5153e8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_golick, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 15 10:42:55 compute-0 ceph-mon[74356]: pgmap v134: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bf85abbf58e7dc02345d6f3cd79a5314b8c8651a09674e4319562e84761e7da-merged.mount: Deactivated successfully.
Dec 15 10:42:55 compute-0 podman[107329]: 2025-12-15 10:42:55.689510365 +0000 UTC m=+0.166049282 container remove 3865614f9ecaa1a2f1f3e778d98d362a744f4b4c39cda4da10dd3b8ae5153e8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_golick, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:42:55 compute-0 systemd[1]: libpod-conmon-3865614f9ecaa1a2f1f3e778d98d362a744f4b4c39cda4da10dd3b8ae5153e8c.scope: Deactivated successfully.
Dec 15 10:42:55 compute-0 podman[107370]: 2025-12-15 10:42:55.835323879 +0000 UTC m=+0.040837740 container create 91914488392d20e2840acc778b4c5bde5899f73f29cf35cddd082d3789527996 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:42:55 compute-0 systemd[1]: Started libpod-conmon-91914488392d20e2840acc778b4c5bde5899f73f29cf35cddd082d3789527996.scope.
Dec 15 10:42:55 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee63a35e564aa8f4d8511450f8483ab9217901215099bb41f8e84898e2a7b720/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee63a35e564aa8f4d8511450f8483ab9217901215099bb41f8e84898e2a7b720/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee63a35e564aa8f4d8511450f8483ab9217901215099bb41f8e84898e2a7b720/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee63a35e564aa8f4d8511450f8483ab9217901215099bb41f8e84898e2a7b720/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:55 compute-0 podman[107370]: 2025-12-15 10:42:55.819835253 +0000 UTC m=+0.025349144 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:42:55 compute-0 podman[107370]: 2025-12-15 10:42:55.922255575 +0000 UTC m=+0.127769466 container init 91914488392d20e2840acc778b4c5bde5899f73f29cf35cddd082d3789527996 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_boyd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:42:55 compute-0 podman[107370]: 2025-12-15 10:42:55.930665844 +0000 UTC m=+0.136179715 container start 91914488392d20e2840acc778b4c5bde5899f73f29cf35cddd082d3789527996 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_boyd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 15 10:42:55 compute-0 podman[107370]: 2025-12-15 10:42:55.935682545 +0000 UTC m=+0.141196456 container attach 91914488392d20e2840acc778b4c5bde5899f73f29cf35cddd082d3789527996 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 15 10:42:56 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v135: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 15 10:42:56 compute-0 elegant_boyd[107387]: {
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:     "0": [
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:         {
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:             "devices": [
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:                 "/dev/loop3"
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:             ],
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:             "lv_name": "ceph_lv0",
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:             "lv_size": "21470642176",
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MCSOGR-6V69-UVGq-8FpK-DBjb-LyDE-FLUPxJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=77365f67-614e-5a8d-b658-640395550c79,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7690eca0-4e87-4157-a045-1912448da925,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:             "lv_uuid": "MCSOGR-6V69-UVGq-8FpK-DBjb-LyDE-FLUPxJ",
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:             "name": "ceph_lv0",
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:             "tags": {
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:                 "ceph.block_uuid": "MCSOGR-6V69-UVGq-8FpK-DBjb-LyDE-FLUPxJ",
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:                 "ceph.cephx_lockbox_secret": "",
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:                 "ceph.cluster_fsid": "77365f67-614e-5a8d-b658-640395550c79",
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:                 "ceph.cluster_name": "ceph",
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:                 "ceph.crush_device_class": "",
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:                 "ceph.encrypted": "0",
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:                 "ceph.osd_fsid": "7690eca0-4e87-4157-a045-1912448da925",
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:                 "ceph.osd_id": "0",
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:                 "ceph.type": "block",
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:                 "ceph.vdo": "0",
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:                 "ceph.with_tpm": "0"
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:             },
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:             "type": "block",
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:             "vg_name": "ceph_vg0"
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:         }
Dec 15 10:42:56 compute-0 elegant_boyd[107387]:     ]
Dec 15 10:42:56 compute-0 elegant_boyd[107387]: }
Dec 15 10:42:56 compute-0 systemd[1]: libpod-91914488392d20e2840acc778b4c5bde5899f73f29cf35cddd082d3789527996.scope: Deactivated successfully.
Dec 15 10:42:56 compute-0 podman[107370]: 2025-12-15 10:42:56.273775302 +0000 UTC m=+0.479289173 container died 91914488392d20e2840acc778b4c5bde5899f73f29cf35cddd082d3789527996 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_boyd, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 15 10:42:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee63a35e564aa8f4d8511450f8483ab9217901215099bb41f8e84898e2a7b720-merged.mount: Deactivated successfully.
Dec 15 10:42:56 compute-0 podman[107370]: 2025-12-15 10:42:56.315719586 +0000 UTC m=+0.521233457 container remove 91914488392d20e2840acc778b4c5bde5899f73f29cf35cddd082d3789527996 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_boyd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 15 10:42:56 compute-0 systemd[1]: libpod-conmon-91914488392d20e2840acc778b4c5bde5899f73f29cf35cddd082d3789527996.scope: Deactivated successfully.
Dec 15 10:42:56 compute-0 sudo[107263]: pam_unix(sudo:session): session closed for user root
Dec 15 10:42:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:56 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c40041c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:56 compute-0 sudo[107408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:42:56 compute-0 sudo[107408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:42:56 compute-0 sudo[107408]: pam_unix(sudo:session): session closed for user root
Dec 15 10:42:56 compute-0 sudo[107434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 -- raw list --format json
Dec 15 10:42:56 compute-0 sudo[107434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:42:56 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:56 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:42:56 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:56.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:42:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:56 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:56 compute-0 ceph-mon[74356]: pgmap v135: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 15 10:42:56 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:56 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:42:56 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:56.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:42:56 compute-0 podman[107500]: 2025-12-15 10:42:56.954335225 +0000 UTC m=+0.057962150 container create a814f4264d364691bef04eb8322b4f5854cf3615bdec759bb2de1c3f561e6b84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_saha, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:42:57 compute-0 systemd[1]: Started libpod-conmon-a814f4264d364691bef04eb8322b4f5854cf3615bdec759bb2de1c3f561e6b84.scope.
Dec 15 10:42:57 compute-0 podman[107500]: 2025-12-15 10:42:56.926677587 +0000 UTC m=+0.030304592 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:42:57 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:42:57 compute-0 podman[107500]: 2025-12-15 10:42:57.042833911 +0000 UTC m=+0.146460796 container init a814f4264d364691bef04eb8322b4f5854cf3615bdec759bb2de1c3f561e6b84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_saha, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:42:57 compute-0 podman[107500]: 2025-12-15 10:42:57.049569147 +0000 UTC m=+0.153196032 container start a814f4264d364691bef04eb8322b4f5854cf3615bdec759bb2de1c3f561e6b84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_saha, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:42:57 compute-0 podman[107500]: 2025-12-15 10:42:57.053285406 +0000 UTC m=+0.156912361 container attach a814f4264d364691bef04eb8322b4f5854cf3615bdec759bb2de1c3f561e6b84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_saha, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:42:57 compute-0 systemd[1]: libpod-a814f4264d364691bef04eb8322b4f5854cf3615bdec759bb2de1c3f561e6b84.scope: Deactivated successfully.
Dec 15 10:42:57 compute-0 nice_saha[107516]: 167 167
Dec 15 10:42:57 compute-0 conmon[107516]: conmon a814f4264d364691bef0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a814f4264d364691bef04eb8322b4f5854cf3615bdec759bb2de1c3f561e6b84.scope/container/memory.events
Dec 15 10:42:57 compute-0 podman[107500]: 2025-12-15 10:42:57.054988831 +0000 UTC m=+0.158615716 container died a814f4264d364691bef04eb8322b4f5854cf3615bdec759bb2de1c3f561e6b84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_saha, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 15 10:42:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c818e9228f911066f8a5dd1525acfb02b0ff0ce281735528d075319386d17ff-merged.mount: Deactivated successfully.
Dec 15 10:42:57 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:42:57 compute-0 podman[107500]: 2025-12-15 10:42:57.176472344 +0000 UTC m=+0.280099239 container remove a814f4264d364691bef04eb8322b4f5854cf3615bdec759bb2de1c3f561e6b84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_saha, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:42:57 compute-0 systemd[1]: libpod-conmon-a814f4264d364691bef04eb8322b4f5854cf3615bdec759bb2de1c3f561e6b84.scope: Deactivated successfully.
Dec 15 10:42:57 compute-0 podman[107542]: 2025-12-15 10:42:57.343142846 +0000 UTC m=+0.039228238 container create 222e5f6004485eab752cbdd031eb9574124822a0decf52a38395b3293b72bcaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_williamson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:42:57 compute-0 systemd[1]: Started libpod-conmon-222e5f6004485eab752cbdd031eb9574124822a0decf52a38395b3293b72bcaa.scope.
Dec 15 10:42:57 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:57 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:57 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:42:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc27345b4ef5f8dbedd428618408f604ff82c2eb619b6ce73fdcc15dffd2977a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc27345b4ef5f8dbedd428618408f604ff82c2eb619b6ce73fdcc15dffd2977a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc27345b4ef5f8dbedd428618408f604ff82c2eb619b6ce73fdcc15dffd2977a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc27345b4ef5f8dbedd428618408f604ff82c2eb619b6ce73fdcc15dffd2977a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:42:57 compute-0 podman[107542]: 2025-12-15 10:42:57.32705223 +0000 UTC m=+0.023137632 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:42:57 compute-0 podman[107542]: 2025-12-15 10:42:57.431603271 +0000 UTC m=+0.127688663 container init 222e5f6004485eab752cbdd031eb9574124822a0decf52a38395b3293b72bcaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_williamson, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:42:57 compute-0 podman[107542]: 2025-12-15 10:42:57.439909927 +0000 UTC m=+0.135995309 container start 222e5f6004485eab752cbdd031eb9574124822a0decf52a38395b3293b72bcaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 15 10:42:57 compute-0 podman[107542]: 2025-12-15 10:42:57.442761329 +0000 UTC m=+0.138846731 container attach 222e5f6004485eab752cbdd031eb9574124822a0decf52a38395b3293b72bcaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_williamson, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [balancer INFO root] Optimize plan auto_2025-12-15_10:42:58
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [balancer INFO root] do_upmap
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.meta', 'default.rgw.log', 'backups', '.nfs', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'volumes']
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 15 10:42:58 compute-0 lvm[107634]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 15 10:42:58 compute-0 lvm[107634]: VG ceph_vg0 finished
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v136: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 15 10:42:58 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:42:58 compute-0 keen_williamson[107558]: {}
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:42:58 compute-0 systemd[1]: libpod-222e5f6004485eab752cbdd031eb9574124822a0decf52a38395b3293b72bcaa.scope: Deactivated successfully.
Dec 15 10:42:58 compute-0 podman[107542]: 2025-12-15 10:42:58.235076613 +0000 UTC m=+0.931162005 container died 222e5f6004485eab752cbdd031eb9574124822a0decf52a38395b3293b72bcaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_williamson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:42:58 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:42:58 compute-0 systemd[1]: libpod-222e5f6004485eab752cbdd031eb9574124822a0decf52a38395b3293b72bcaa.scope: Consumed 1.374s CPU time.
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:42:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc27345b4ef5f8dbedd428618408f604ff82c2eb619b6ce73fdcc15dffd2977a-merged.mount: Deactivated successfully.
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 15 10:42:58 compute-0 podman[107542]: 2025-12-15 10:42:58.280566732 +0000 UTC m=+0.976652124 container remove 222e5f6004485eab752cbdd031eb9574124822a0decf52a38395b3293b72bcaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_williamson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 15 10:42:58 compute-0 systemd[1]: libpod-conmon-222e5f6004485eab752cbdd031eb9574124822a0decf52a38395b3293b72bcaa.scope: Deactivated successfully.
Dec 15 10:42:58 compute-0 sudo[107434]: pam_unix(sudo:session): session closed for user root
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 15 10:42:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 15 10:42:58 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:42:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:42:58 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:42:58 compute-0 sudo[107649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 15 10:42:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:58 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:58 compute-0 sudo[107649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:42:58 compute-0 sudo[107649]: pam_unix(sudo:session): session closed for user root
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:42:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:42:58 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:58 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:42:58 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:42:58.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:42:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:58 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c40041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:42:58 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:42:58 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:42:58 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:42:58.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:42:59 compute-0 ceph-mon[74356]: pgmap v136: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:42:59 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:42:59 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:42:59 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:42:59 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c40041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:00 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v137: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:43:00 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:00 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:00 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:00 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:43:00 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:00.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:43:00 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:00 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc00be60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:00 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:00 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:43:00 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:00.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:43:01 compute-0 ceph-mon[74356]: pgmap v137: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:43:01 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:01 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:02 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:43:02 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v138: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:43:02 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:02 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:02 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:02 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:02 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:02.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:02 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:02 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:02 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:02 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:02 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:02.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:02 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:43:02] "GET /metrics HTTP/1.1" 200 48231 "" "Prometheus/2.51.0"
Dec 15 10:43:02 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:43:02] "GET /metrics HTTP/1.1" 200 48231 "" "Prometheus/2.51.0"
Dec 15 10:43:03 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:03 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:03 compute-0 ceph-mon[74356]: pgmap v138: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:43:04 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v139: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:43:04 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:04 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4004220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:04 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:04 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:43:04 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:04.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:43:04 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:04 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc00be60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:04 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:04 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:43:04 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:04.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:43:05 compute-0 sudo[107704]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdpesfgpanagzpdmgpilrpjhxosjbuks ; /usr/bin/python3'
Dec 15 10:43:05 compute-0 sudo[107704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:43:05 compute-0 python3[107706]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:43:05 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:05 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc00be60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:05 compute-0 podman[107708]: 2025-12-15 10:43:05.412478747 +0000 UTC m=+0.064652814 container create 30cbcd7f68d43b283a35789f14303f2b221766b28b6dd3bb22918d991045c3a4 (image=quay.io/ceph/ceph:v19, name=gifted_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 15 10:43:05 compute-0 systemd[1]: Started libpod-conmon-30cbcd7f68d43b283a35789f14303f2b221766b28b6dd3bb22918d991045c3a4.scope.
Dec 15 10:43:05 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:43:05 compute-0 podman[107708]: 2025-12-15 10:43:05.387747414 +0000 UTC m=+0.039921511 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:43:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5502dbbd42f0ac5cee4115f978d79a7c59ee2dae3b4d83f5017d281ddbad504/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:43:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5502dbbd42f0ac5cee4115f978d79a7c59ee2dae3b4d83f5017d281ddbad504/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:43:05 compute-0 podman[107708]: 2025-12-15 10:43:05.502498612 +0000 UTC m=+0.154672699 container init 30cbcd7f68d43b283a35789f14303f2b221766b28b6dd3bb22918d991045c3a4 (image=quay.io/ceph/ceph:v19, name=gifted_mendeleev, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:43:05 compute-0 podman[107708]: 2025-12-15 10:43:05.509858699 +0000 UTC m=+0.162032786 container start 30cbcd7f68d43b283a35789f14303f2b221766b28b6dd3bb22918d991045c3a4 (image=quay.io/ceph/ceph:v19, name=gifted_mendeleev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:43:05 compute-0 podman[107708]: 2025-12-15 10:43:05.514040292 +0000 UTC m=+0.166214399 container attach 30cbcd7f68d43b283a35789f14303f2b221766b28b6dd3bb22918d991045c3a4 (image=quay.io/ceph/ceph:v19, name=gifted_mendeleev, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:43:05 compute-0 gifted_mendeleev[107723]: ERROR: invalid flag --daemon-type
Dec 15 10:43:05 compute-0 systemd[1]: libpod-30cbcd7f68d43b283a35789f14303f2b221766b28b6dd3bb22918d991045c3a4.scope: Deactivated successfully.
Dec 15 10:43:05 compute-0 podman[107708]: 2025-12-15 10:43:05.59416224 +0000 UTC m=+0.246336307 container died 30cbcd7f68d43b283a35789f14303f2b221766b28b6dd3bb22918d991045c3a4 (image=quay.io/ceph/ceph:v19, name=gifted_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:43:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5502dbbd42f0ac5cee4115f978d79a7c59ee2dae3b4d83f5017d281ddbad504-merged.mount: Deactivated successfully.
Dec 15 10:43:05 compute-0 systemd[90580]: Created slice User Background Tasks Slice.
Dec 15 10:43:05 compute-0 podman[107708]: 2025-12-15 10:43:05.633540712 +0000 UTC m=+0.285714779 container remove 30cbcd7f68d43b283a35789f14303f2b221766b28b6dd3bb22918d991045c3a4 (image=quay.io/ceph/ceph:v19, name=gifted_mendeleev, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 15 10:43:05 compute-0 systemd[90580]: Starting Cleanup of User's Temporary Files and Directories...
Dec 15 10:43:05 compute-0 systemd[1]: libpod-conmon-30cbcd7f68d43b283a35789f14303f2b221766b28b6dd3bb22918d991045c3a4.scope: Deactivated successfully.
Dec 15 10:43:05 compute-0 sudo[107704]: pam_unix(sudo:session): session closed for user root
Dec 15 10:43:05 compute-0 ceph-mon[74356]: pgmap v139: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:43:05 compute-0 systemd[90580]: Finished Cleanup of User's Temporary Files and Directories.
Dec 15 10:43:05 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-nfs-cephfs-compute-0-ykblqa[95047]: [WARNING] 348/104305 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 15 10:43:06 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v140: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 15 10:43:06 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:06 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:06 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:06 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:43:06 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:06.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:43:06 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:06 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4004240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:06 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:06 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:06 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:06.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:06 compute-0 ceph-mon[74356]: pgmap v140: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 15 10:43:07 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:43:07 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:07 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:08 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v141: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 15 10:43:08 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:08 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc00be60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:08 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:08 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:43:08 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:08.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:43:08 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:08 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:08 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:08 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:08 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:08.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:09 compute-0 ceph-mon[74356]: pgmap v141: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 15 10:43:09 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:09 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4004260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:10 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v142: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 15 10:43:10 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:10 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:10 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:10 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:43:10 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:10.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:43:10 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:10 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc00be60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:10 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:10 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:10 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:10.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:11 compute-0 ceph-mon[74356]: pgmap v142: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 15 10:43:11 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:11 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:12 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:43:12 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v143: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 15 10:43:12 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:12 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4004280 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:12 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:12 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:43:12 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:12.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:43:12 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:12 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:12 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:12 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:12 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:12.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:12 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:43:12] "GET /metrics HTTP/1.1" 200 48231 "" "Prometheus/2.51.0"
Dec 15 10:43:12 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:43:12] "GET /metrics HTTP/1.1" 200 48231 "" "Prometheus/2.51.0"
Dec 15 10:43:13 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 15 10:43:13 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:43:13 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:13 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc00be60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:13 compute-0 ceph-mon[74356]: pgmap v143: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 15 10:43:13 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:43:14 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v144: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 15 10:43:14 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:14 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:14 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:14 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:43:14 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:14.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:43:14 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:14 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c40042a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:14 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:14 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:43:14 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:14.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:43:15 compute-0 sudo[107762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 15 10:43:15 compute-0 sudo[107762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:43:15 compute-0 sudo[107762]: pam_unix(sudo:session): session closed for user root
Dec 15 10:43:15 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:15 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 15 10:43:15 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:15 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:15 compute-0 ceph-mon[74356]: pgmap v144: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 15 10:43:15 compute-0 sudo[107811]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcrdwolvcvskimgklspcijujoxedfehm ; /usr/bin/python3'
Dec 15 10:43:15 compute-0 sudo[107811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:43:15 compute-0 python3[107813]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:43:15 compute-0 podman[107814]: 2025-12-15 10:43:15.927884747 +0000 UTC m=+0.053272069 container create 070aaf3ab6c0db3c50360fc95292b05f8e1ec4ccf560cf5dc81fa642bf9c62e2 (image=quay.io/ceph/ceph:v19, name=optimistic_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:43:15 compute-0 systemd[1]: Started libpod-conmon-070aaf3ab6c0db3c50360fc95292b05f8e1ec4ccf560cf5dc81fa642bf9c62e2.scope.
Dec 15 10:43:15 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:43:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3da0edca685e10a2824cfc6e71752041043aad6f407e830b470252a45d64bacb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:43:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3da0edca685e10a2824cfc6e71752041043aad6f407e830b470252a45d64bacb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:43:15 compute-0 podman[107814]: 2025-12-15 10:43:15.903088952 +0000 UTC m=+0.028476254 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:43:16 compute-0 podman[107814]: 2025-12-15 10:43:16.006171406 +0000 UTC m=+0.131558698 container init 070aaf3ab6c0db3c50360fc95292b05f8e1ec4ccf560cf5dc81fa642bf9c62e2 (image=quay.io/ceph/ceph:v19, name=optimistic_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 15 10:43:16 compute-0 podman[107814]: 2025-12-15 10:43:16.015177715 +0000 UTC m=+0.140565007 container start 070aaf3ab6c0db3c50360fc95292b05f8e1ec4ccf560cf5dc81fa642bf9c62e2 (image=quay.io/ceph/ceph:v19, name=optimistic_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True)
Dec 15 10:43:16 compute-0 podman[107814]: 2025-12-15 10:43:16.018416469 +0000 UTC m=+0.143803771 container attach 070aaf3ab6c0db3c50360fc95292b05f8e1ec4ccf560cf5dc81fa642bf9c62e2 (image=quay.io/ceph/ceph:v19, name=optimistic_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:43:16 compute-0 optimistic_napier[107829]: ERROR: invalid flag --daemon-type
Dec 15 10:43:16 compute-0 systemd[1]: libpod-070aaf3ab6c0db3c50360fc95292b05f8e1ec4ccf560cf5dc81fa642bf9c62e2.scope: Deactivated successfully.
Dec 15 10:43:16 compute-0 podman[107849]: 2025-12-15 10:43:16.118252798 +0000 UTC m=+0.033999530 container died 070aaf3ab6c0db3c50360fc95292b05f8e1ec4ccf560cf5dc81fa642bf9c62e2 (image=quay.io/ceph/ceph:v19, name=optimistic_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 15 10:43:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-3da0edca685e10a2824cfc6e71752041043aad6f407e830b470252a45d64bacb-merged.mount: Deactivated successfully.
Dec 15 10:43:16 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v145: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:43:16 compute-0 podman[107849]: 2025-12-15 10:43:16.160514293 +0000 UTC m=+0.076261035 container remove 070aaf3ab6c0db3c50360fc95292b05f8e1ec4ccf560cf5dc81fa642bf9c62e2 (image=quay.io/ceph/ceph:v19, name=optimistic_napier, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 15 10:43:16 compute-0 systemd[1]: libpod-conmon-070aaf3ab6c0db3c50360fc95292b05f8e1ec4ccf560cf5dc81fa642bf9c62e2.scope: Deactivated successfully.
Dec 15 10:43:16 compute-0 sudo[107811]: pam_unix(sudo:session): session closed for user root
Dec 15 10:43:16 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:16 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:16 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:16 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:43:16 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:16.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:43:16 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:16 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc00be60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:16 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:16 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:43:16 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:16.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:43:17 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:43:17 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:17 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc00be60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:17 compute-0 ceph-mon[74356]: pgmap v145: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:43:18 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v146: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:43:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:18 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:18 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 15 10:43:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:18 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 15 10:43:18 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:18 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:18 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:18.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:18 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:18 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:18 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:18 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:18.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:19 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:19 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:19 compute-0 ceph-mon[74356]: pgmap v146: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:43:20 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v147: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 15 10:43:20 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:20 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc00be60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:20 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:20 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:20 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:20.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:20 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:20 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:20 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:20 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:20 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:20.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:21 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:21 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 15 10:43:21 compute-0 ceph-mon[74356]: pgmap v147: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 15 10:43:22 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:43:22 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v148: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 15 10:43:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:22 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:22 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:22 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:22 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:22.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:22 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc00be60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:22 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:22 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:22 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:22.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:43:22] "GET /metrics HTTP/1.1" 200 48230 "" "Prometheus/2.51.0"
Dec 15 10:43:22 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:43:22] "GET /metrics HTTP/1.1" 200 48230 "" "Prometheus/2.51.0"
Dec 15 10:43:22 compute-0 ceph-mon[74356]: pgmap v148: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 15 10:43:23 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:23 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:24 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v149: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 15 10:43:24 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:24 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:24 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:24 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:24 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:24.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:24 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:24 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:24 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:24 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:24 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:24.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:25 compute-0 ceph-mon[74356]: pgmap v149: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 15 10:43:25 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:25 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc00be60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:26 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v150: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 15 10:43:26 compute-0 sudo[107898]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wweaixddaqsqfxgdkukxrwbmduezvdrl ; /usr/bin/python3'
Dec 15 10:43:26 compute-0 sudo[107898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:43:26 compute-0 python3[107900]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:43:26 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:26 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:26 compute-0 podman[107901]: 2025-12-15 10:43:26.52512429 +0000 UTC m=+0.063951561 container create 7e8960b2555b76459b45e6ce432ab990c707efc242f77852f9fbf9be5ee7ce3f (image=quay.io/ceph/ceph:v19, name=gracious_spence, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 15 10:43:26 compute-0 systemd[1]: Started libpod-conmon-7e8960b2555b76459b45e6ce432ab990c707efc242f77852f9fbf9be5ee7ce3f.scope.
Dec 15 10:43:26 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d6c59bee2dcda59b191afcc19b6eaa49bb39a580c22ab12d211d8732138cce5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d6c59bee2dcda59b191afcc19b6eaa49bb39a580c22ab12d211d8732138cce5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:43:26 compute-0 podman[107901]: 2025-12-15 10:43:26.500113158 +0000 UTC m=+0.038940429 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:43:26 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:26 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:26 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:26.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:26 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:26 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:26 compute-0 podman[107901]: 2025-12-15 10:43:26.683576748 +0000 UTC m=+0.222404029 container init 7e8960b2555b76459b45e6ce432ab990c707efc242f77852f9fbf9be5ee7ce3f (image=quay.io/ceph/ceph:v19, name=gracious_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:43:26 compute-0 podman[107901]: 2025-12-15 10:43:26.691020837 +0000 UTC m=+0.229848078 container start 7e8960b2555b76459b45e6ce432ab990c707efc242f77852f9fbf9be5ee7ce3f (image=quay.io/ceph/ceph:v19, name=gracious_spence, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 15 10:43:26 compute-0 podman[107901]: 2025-12-15 10:43:26.696896316 +0000 UTC m=+0.235723567 container attach 7e8960b2555b76459b45e6ce432ab990c707efc242f77852f9fbf9be5ee7ce3f (image=quay.io/ceph/ceph:v19, name=gracious_spence, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:43:26 compute-0 gracious_spence[107917]: ERROR: invalid flag --daemon-type
Dec 15 10:43:26 compute-0 systemd[1]: libpod-7e8960b2555b76459b45e6ce432ab990c707efc242f77852f9fbf9be5ee7ce3f.scope: Deactivated successfully.
Dec 15 10:43:26 compute-0 podman[107901]: 2025-12-15 10:43:26.740631177 +0000 UTC m=+0.279458408 container died 7e8960b2555b76459b45e6ce432ab990c707efc242f77852f9fbf9be5ee7ce3f (image=quay.io/ceph/ceph:v19, name=gracious_spence, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:43:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d6c59bee2dcda59b191afcc19b6eaa49bb39a580c22ab12d211d8732138cce5-merged.mount: Deactivated successfully.
Dec 15 10:43:26 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:26 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:26 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:26.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:26 compute-0 podman[107901]: 2025-12-15 10:43:26.817667316 +0000 UTC m=+0.356494567 container remove 7e8960b2555b76459b45e6ce432ab990c707efc242f77852f9fbf9be5ee7ce3f (image=quay.io/ceph/ceph:v19, name=gracious_spence, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 15 10:43:26 compute-0 systemd[1]: libpod-conmon-7e8960b2555b76459b45e6ce432ab990c707efc242f77852f9fbf9be5ee7ce3f.scope: Deactivated successfully.
Dec 15 10:43:26 compute-0 sudo[107898]: pam_unix(sudo:session): session closed for user root
Dec 15 10:43:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:43:27 compute-0 ceph-mon[74356]: pgmap v150: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 15 10:43:27 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:27 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8002a60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:27 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-nfs-cephfs-compute-0-ykblqa[95047]: [WARNING] 348/104327 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 15 10:43:28 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v151: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 15 10:43:28 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 15 10:43:28 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:43:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:43:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:43:28 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:43:28 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:28 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8002a60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:43:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:43:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:43:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:43:28 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:28 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:28 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:28.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:28 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:28 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0002920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:28 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:28 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:28 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:28.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:29 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:29 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:29 compute-0 ceph-mon[74356]: pgmap v151: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 15 10:43:30 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v152: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 15 10:43:30 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:30 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8002a60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:30 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:30 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:30 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:30.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:30 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:30 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:30 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:30 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:30 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:30.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:31 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:31 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0002920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:31 compute-0 ceph-mon[74356]: pgmap v152: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 15 10:43:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:43:32 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v153: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:43:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:32 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:32 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:32 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:32 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:32.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:32 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8002a60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:32 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:32 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:32 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:32.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:43:32] "GET /metrics HTTP/1.1" 200 48233 "" "Prometheus/2.51.0"
Dec 15 10:43:32 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:43:32] "GET /metrics HTTP/1.1" 200 48233 "" "Prometheus/2.51.0"
Dec 15 10:43:33 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:33 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:33 compute-0 ceph-mon[74356]: pgmap v153: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:43:34 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v154: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:43:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:34 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0002920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:34 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:34 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:43:34 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:34.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:43:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:34 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:34 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:34 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:34 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:34.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:35 compute-0 sudo[107961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 15 10:43:35 compute-0 sudo[107961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:43:35 compute-0 sudo[107961]: pam_unix(sudo:session): session closed for user root
Dec 15 10:43:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:35 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8002c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:35 compute-0 ceph-mon[74356]: pgmap v154: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:43:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-nfs-cephfs-compute-0-ykblqa[95047]: [WARNING] 348/104335 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 15 10:43:36 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v155: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:43:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:36 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:36 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:36 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:36 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:36.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:36 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0002920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:36 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:36 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:43:36 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:36.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:43:36 compute-0 ceph-mon[74356]: pgmap v155: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:43:36 compute-0 sudo[108011]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xojumkvddojswizriuozawwbcklsuvgs ; /usr/bin/python3'
Dec 15 10:43:36 compute-0 sudo[108011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:43:37 compute-0 python3[108013]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:43:37 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:43:37 compute-0 podman[108014]: 2025-12-15 10:43:37.100429398 +0000 UTC m=+0.024697653 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:43:37 compute-0 podman[108014]: 2025-12-15 10:43:37.265816178 +0000 UTC m=+0.190084413 container create b0da4043fbe4b40197877fa22e0a902ad5e77bbb6d5a62068f5e9dca62de47ca (image=quay.io/ceph/ceph:v19, name=adoring_booth, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 15 10:43:37 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:37 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:37 compute-0 systemd[1]: Started libpod-conmon-b0da4043fbe4b40197877fa22e0a902ad5e77bbb6d5a62068f5e9dca62de47ca.scope.
Dec 15 10:43:37 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8483b0804c32521c7f8cbadbfee5335cc4e9fc03c76942423b72789bee3228/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8483b0804c32521c7f8cbadbfee5335cc4e9fc03c76942423b72789bee3228/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:43:37 compute-0 podman[108014]: 2025-12-15 10:43:37.503382533 +0000 UTC m=+0.427650778 container init b0da4043fbe4b40197877fa22e0a902ad5e77bbb6d5a62068f5e9dca62de47ca (image=quay.io/ceph/ceph:v19, name=adoring_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 15 10:43:37 compute-0 podman[108014]: 2025-12-15 10:43:37.510034016 +0000 UTC m=+0.434302251 container start b0da4043fbe4b40197877fa22e0a902ad5e77bbb6d5a62068f5e9dca62de47ca (image=quay.io/ceph/ceph:v19, name=adoring_booth, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 15 10:43:37 compute-0 adoring_booth[108030]: ERROR: invalid flag --daemon-type
Dec 15 10:43:37 compute-0 podman[108014]: 2025-12-15 10:43:37.557394624 +0000 UTC m=+0.481662889 container attach b0da4043fbe4b40197877fa22e0a902ad5e77bbb6d5a62068f5e9dca62de47ca (image=quay.io/ceph/ceph:v19, name=adoring_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 15 10:43:37 compute-0 systemd[1]: libpod-b0da4043fbe4b40197877fa22e0a902ad5e77bbb6d5a62068f5e9dca62de47ca.scope: Deactivated successfully.
Dec 15 10:43:37 compute-0 podman[108014]: 2025-12-15 10:43:37.559835102 +0000 UTC m=+0.484103337 container died b0da4043fbe4b40197877fa22e0a902ad5e77bbb6d5a62068f5e9dca62de47ca (image=quay.io/ceph/ceph:v19, name=adoring_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 15 10:43:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac8483b0804c32521c7f8cbadbfee5335cc4e9fc03c76942423b72789bee3228-merged.mount: Deactivated successfully.
Dec 15 10:43:37 compute-0 podman[108014]: 2025-12-15 10:43:37.68364471 +0000 UTC m=+0.607912945 container remove b0da4043fbe4b40197877fa22e0a902ad5e77bbb6d5a62068f5e9dca62de47ca (image=quay.io/ceph/ceph:v19, name=adoring_booth, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 15 10:43:37 compute-0 sudo[108011]: pam_unix(sudo:session): session closed for user root
Dec 15 10:43:37 compute-0 systemd[1]: libpod-conmon-b0da4043fbe4b40197877fa22e0a902ad5e77bbb6d5a62068f5e9dca62de47ca.scope: Deactivated successfully.
Dec 15 10:43:37 compute-0 sudo[108084]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjkeajnixqffditbsfmhojqmnlnhyhym ; /usr/bin/python3'
Dec 15 10:43:37 compute-0 sudo[108084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:43:38 compute-0 podman[108087]: 2025-12-15 10:43:38.041340864 +0000 UTC m=+0.025107115 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:43:38 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v156: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 15 10:43:38 compute-0 podman[108087]: 2025-12-15 10:43:38.183000224 +0000 UTC m=+0.166766455 container create cadacfb536e14717db116244617d3cdbaf654e7a47f86f6c7d7da442d5163551 (image=quay.io/ceph/ceph:v19, name=goofy_feistel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 15 10:43:38 compute-0 systemd[1]: Started libpod-conmon-cadacfb536e14717db116244617d3cdbaf654e7a47f86f6c7d7da442d5163551.scope.
Dec 15 10:43:38 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:43:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e6fb8b81d8a28bf89100b5c57d9bb5ffadf0c9ca422a25d8f9596a6bb59a814/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:43:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e6fb8b81d8a28bf89100b5c57d9bb5ffadf0c9ca422a25d8f9596a6bb59a814/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:43:38 compute-0 podman[108087]: 2025-12-15 10:43:38.385777354 +0000 UTC m=+0.369543595 container init cadacfb536e14717db116244617d3cdbaf654e7a47f86f6c7d7da442d5163551 (image=quay.io/ceph/ceph:v19, name=goofy_feistel, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:43:38 compute-0 podman[108087]: 2025-12-15 10:43:38.391952012 +0000 UTC m=+0.375718243 container start cadacfb536e14717db116244617d3cdbaf654e7a47f86f6c7d7da442d5163551 (image=quay.io/ceph/ceph:v19, name=goofy_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:43:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:38 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8002c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:38 compute-0 podman[108087]: 2025-12-15 10:43:38.470551561 +0000 UTC m=+0.454317792 container attach cadacfb536e14717db116244617d3cdbaf654e7a47f86f6c7d7da442d5163551 (image=quay.io/ceph/ceph:v19, name=goofy_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:43:38 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:38 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:43:38 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:38.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:43:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:38 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:38 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:38 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:43:38 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:38.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:43:38 compute-0 goofy_feistel[108102]: could not fetch user info: no user info saved
Dec 15 10:43:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:39 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e00036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:39 compute-0 ceph-mon[74356]: pgmap v156: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 15 10:43:39 compute-0 systemd[1]: libpod-cadacfb536e14717db116244617d3cdbaf654e7a47f86f6c7d7da442d5163551.scope: Deactivated successfully.
Dec 15 10:43:39 compute-0 podman[108087]: 2025-12-15 10:43:39.832667159 +0000 UTC m=+1.816433390 container died cadacfb536e14717db116244617d3cdbaf654e7a47f86f6c7d7da442d5163551 (image=quay.io/ceph/ceph:v19, name=goofy_feistel, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 15 10:43:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e6fb8b81d8a28bf89100b5c57d9bb5ffadf0c9ca422a25d8f9596a6bb59a814-merged.mount: Deactivated successfully.
Dec 15 10:43:39 compute-0 podman[108087]: 2025-12-15 10:43:39.869014924 +0000 UTC m=+1.852781155 container remove cadacfb536e14717db116244617d3cdbaf654e7a47f86f6c7d7da442d5163551 (image=quay.io/ceph/ceph:v19, name=goofy_feistel, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 15 10:43:39 compute-0 systemd[1]: libpod-conmon-cadacfb536e14717db116244617d3cdbaf654e7a47f86f6c7d7da442d5163551.scope: Deactivated successfully.
Dec 15 10:43:39 compute-0 sudo[108084]: pam_unix(sudo:session): session closed for user root
Dec 15 10:43:40 compute-0 sudo[108225]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khdobuqrlolzdewfxcxmhpnyyxqnnnjl ; /usr/bin/python3'
Dec 15 10:43:40 compute-0 sudo[108225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:43:40 compute-0 python3[108227]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 77365f67-614e-5a8d-b658-640395550c79 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="glance" --display-name="Glance S3 User" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 15 10:43:40 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v157: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 15 10:43:40 compute-0 podman[108228]: 2025-12-15 10:43:40.225313983 +0000 UTC m=+0.062825134 container create f01f8d53d44673aaef225155689deeb2c83d619562284652f681f4c5678d235e (image=quay.io/ceph/ceph:v19, name=confident_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:43:40 compute-0 systemd[1]: Started libpod-conmon-f01f8d53d44673aaef225155689deeb2c83d619562284652f681f4c5678d235e.scope.
Dec 15 10:43:40 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fdb61089e67a8bab8ed89e22ad342158e2845e9f7396a4fe42f7799aa31248e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fdb61089e67a8bab8ed89e22ad342158e2845e9f7396a4fe42f7799aa31248e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:43:40 compute-0 podman[108228]: 2025-12-15 10:43:40.190445576 +0000 UTC m=+0.027956767 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:43:40 compute-0 podman[108228]: 2025-12-15 10:43:40.292116474 +0000 UTC m=+0.129627625 container init f01f8d53d44673aaef225155689deeb2c83d619562284652f681f4c5678d235e (image=quay.io/ceph/ceph:v19, name=confident_swirles, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 15 10:43:40 compute-0 podman[108228]: 2025-12-15 10:43:40.298678615 +0000 UTC m=+0.136189786 container start f01f8d53d44673aaef225155689deeb2c83d619562284652f681f4c5678d235e (image=quay.io/ceph/ceph:v19, name=confident_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:43:40 compute-0 podman[108228]: 2025-12-15 10:43:40.302655173 +0000 UTC m=+0.140166344 container attach f01f8d53d44673aaef225155689deeb2c83d619562284652f681f4c5678d235e (image=quay.io/ceph/ceph:v19, name=confident_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True)
Dec 15 10:43:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:40 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003470 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:40 compute-0 confident_swirles[108243]: {
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     "user_id": "glance",
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     "display_name": "Glance S3 User",
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     "email": "",
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     "suspended": 0,
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     "max_buckets": 1000,
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     "subusers": [],
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     "keys": [
Dec 15 10:43:40 compute-0 confident_swirles[108243]:         {
Dec 15 10:43:40 compute-0 confident_swirles[108243]:             "user": "glance",
Dec 15 10:43:40 compute-0 confident_swirles[108243]:             "access_key": "2MICXJ9SRNL70JFND8YR",
Dec 15 10:43:40 compute-0 confident_swirles[108243]:             "secret_key": "VI1s6bvaFCZSpY6L2REFdAddhyO8BOTUVlycw4Rn",
Dec 15 10:43:40 compute-0 confident_swirles[108243]:             "active": true,
Dec 15 10:43:40 compute-0 confident_swirles[108243]:             "create_date": "2025-12-15T10:43:40.472759Z"
Dec 15 10:43:40 compute-0 confident_swirles[108243]:         }
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     ],
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     "swift_keys": [],
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     "caps": [],
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     "op_mask": "read, write, delete",
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     "default_placement": "",
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     "default_storage_class": "",
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     "placement_tags": [],
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     "bucket_quota": {
Dec 15 10:43:40 compute-0 confident_swirles[108243]:         "enabled": false,
Dec 15 10:43:40 compute-0 confident_swirles[108243]:         "check_on_raw": false,
Dec 15 10:43:40 compute-0 confident_swirles[108243]:         "max_size": -1,
Dec 15 10:43:40 compute-0 confident_swirles[108243]:         "max_size_kb": 0,
Dec 15 10:43:40 compute-0 confident_swirles[108243]:         "max_objects": -1
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     },
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     "user_quota": {
Dec 15 10:43:40 compute-0 confident_swirles[108243]:         "enabled": false,
Dec 15 10:43:40 compute-0 confident_swirles[108243]:         "check_on_raw": false,
Dec 15 10:43:40 compute-0 confident_swirles[108243]:         "max_size": -1,
Dec 15 10:43:40 compute-0 confident_swirles[108243]:         "max_size_kb": 0,
Dec 15 10:43:40 compute-0 confident_swirles[108243]:         "max_objects": -1
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     },
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     "temp_url_keys": [],
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     "type": "rgw",
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     "mfa_ids": [],
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     "account_id": "",
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     "path": "/",
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     "create_date": "2025-12-15T10:43:40.471319Z",
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     "tags": [],
Dec 15 10:43:40 compute-0 confident_swirles[108243]:     "group_ids": []
Dec 15 10:43:40 compute-0 confident_swirles[108243]: }
Dec 15 10:43:40 compute-0 confident_swirles[108243]: 
Dec 15 10:43:40 compute-0 systemd[1]: libpod-f01f8d53d44673aaef225155689deeb2c83d619562284652f681f4c5678d235e.scope: Deactivated successfully.
Dec 15 10:43:40 compute-0 podman[108228]: 2025-12-15 10:43:40.543408599 +0000 UTC m=+0.380919760 container died f01f8d53d44673aaef225155689deeb2c83d619562284652f681f4c5678d235e (image=quay.io/ceph/ceph:v19, name=confident_swirles, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0)
Dec 15 10:43:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fdb61089e67a8bab8ed89e22ad342158e2845e9f7396a4fe42f7799aa31248e-merged.mount: Deactivated successfully.
Dec 15 10:43:40 compute-0 podman[108228]: 2025-12-15 10:43:40.587709319 +0000 UTC m=+0.425220470 container remove f01f8d53d44673aaef225155689deeb2c83d619562284652f681f4c5678d235e (image=quay.io/ceph/ceph:v19, name=confident_swirles, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:43:40 compute-0 systemd[1]: libpod-conmon-f01f8d53d44673aaef225155689deeb2c83d619562284652f681f4c5678d235e.scope: Deactivated successfully.
Dec 15 10:43:40 compute-0 sudo[108225]: pam_unix(sudo:session): session closed for user root
Dec 15 10:43:40 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:40 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:40 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:40.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:40 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8002c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:40 compute-0 ceph-mon[74356]: pgmap v157: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 15 10:43:40 compute-0 sudo[108365]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgfqajawsjriabvcflyrfgvpeqmzcodj ; /usr/bin/python3'
Dec 15 10:43:40 compute-0 sudo[108365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:43:40 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:40 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:40 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:40.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:40 compute-0 podman[108368]: 2025-12-15 10:43:40.941144527 +0000 UTC m=+0.040294833 container create f798f90f5b5c5762636a8aba10f69149a482fabcb902a5492cacf54be3fb2e9b (image=quay.io/ceph/ceph:v19, name=objective_feynman, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:43:40 compute-0 systemd[1]: Started libpod-conmon-f798f90f5b5c5762636a8aba10f69149a482fabcb902a5492cacf54be3fb2e9b.scope.
Dec 15 10:43:41 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b20c9f624051d8560706cd481e55862e6d29a09df7d264111d9c214db603555/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b20c9f624051d8560706cd481e55862e6d29a09df7d264111d9c214db603555/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:43:41 compute-0 podman[108368]: 2025-12-15 10:43:40.92406821 +0000 UTC m=+0.023218546 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 15 10:43:41 compute-0 podman[108368]: 2025-12-15 10:43:41.024708765 +0000 UTC m=+0.123859081 container init f798f90f5b5c5762636a8aba10f69149a482fabcb902a5492cacf54be3fb2e9b (image=quay.io/ceph/ceph:v19, name=objective_feynman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Dec 15 10:43:41 compute-0 podman[108368]: 2025-12-15 10:43:41.031305286 +0000 UTC m=+0.130455592 container start f798f90f5b5c5762636a8aba10f69149a482fabcb902a5492cacf54be3fb2e9b (image=quay.io/ceph/ceph:v19, name=objective_feynman, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 15 10:43:41 compute-0 podman[108368]: 2025-12-15 10:43:41.034697105 +0000 UTC m=+0.133847411 container attach f798f90f5b5c5762636a8aba10f69149a482fabcb902a5492cacf54be3fb2e9b (image=quay.io/ceph/ceph:v19, name=objective_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 15 10:43:41 compute-0 objective_feynman[108384]: {
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     "user_id": "glance",
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     "display_name": "Glance S3 User",
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     "email": "",
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     "suspended": 0,
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     "max_buckets": 1000,
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     "subusers": [],
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     "keys": [
Dec 15 10:43:41 compute-0 objective_feynman[108384]:         {
Dec 15 10:43:41 compute-0 objective_feynman[108384]:             "user": "glance",
Dec 15 10:43:41 compute-0 objective_feynman[108384]:             "access_key": "2MICXJ9SRNL70JFND8YR",
Dec 15 10:43:41 compute-0 objective_feynman[108384]:             "secret_key": "VI1s6bvaFCZSpY6L2REFdAddhyO8BOTUVlycw4Rn",
Dec 15 10:43:41 compute-0 objective_feynman[108384]:             "active": true,
Dec 15 10:43:41 compute-0 objective_feynman[108384]:             "create_date": "2025-12-15T10:43:40.472759Z"
Dec 15 10:43:41 compute-0 objective_feynman[108384]:         }
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     ],
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     "swift_keys": [],
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     "caps": [],
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     "op_mask": "read, write, delete",
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     "default_placement": "",
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     "default_storage_class": "",
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     "placement_tags": [],
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     "bucket_quota": {
Dec 15 10:43:41 compute-0 objective_feynman[108384]:         "enabled": false,
Dec 15 10:43:41 compute-0 objective_feynman[108384]:         "check_on_raw": false,
Dec 15 10:43:41 compute-0 objective_feynman[108384]:         "max_size": -1,
Dec 15 10:43:41 compute-0 objective_feynman[108384]:         "max_size_kb": 0,
Dec 15 10:43:41 compute-0 objective_feynman[108384]:         "max_objects": -1
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     },
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     "user_quota": {
Dec 15 10:43:41 compute-0 objective_feynman[108384]:         "enabled": false,
Dec 15 10:43:41 compute-0 objective_feynman[108384]:         "check_on_raw": false,
Dec 15 10:43:41 compute-0 objective_feynman[108384]:         "max_size": -1,
Dec 15 10:43:41 compute-0 objective_feynman[108384]:         "max_size_kb": 0,
Dec 15 10:43:41 compute-0 objective_feynman[108384]:         "max_objects": -1
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     },
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     "temp_url_keys": [],
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     "type": "rgw",
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     "mfa_ids": [],
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     "account_id": "",
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     "path": "/",
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     "create_date": "2025-12-15T10:43:40.471319Z",
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     "tags": [],
Dec 15 10:43:41 compute-0 objective_feynman[108384]:     "group_ids": []
Dec 15 10:43:41 compute-0 objective_feynman[108384]: }
Dec 15 10:43:41 compute-0 objective_feynman[108384]: 
Dec 15 10:43:41 compute-0 systemd[1]: libpod-f798f90f5b5c5762636a8aba10f69149a482fabcb902a5492cacf54be3fb2e9b.scope: Deactivated successfully.
Dec 15 10:43:41 compute-0 podman[108368]: 2025-12-15 10:43:41.242089412 +0000 UTC m=+0.341239728 container died f798f90f5b5c5762636a8aba10f69149a482fabcb902a5492cacf54be3fb2e9b (image=quay.io/ceph/ceph:v19, name=objective_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 15 10:43:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b20c9f624051d8560706cd481e55862e6d29a09df7d264111d9c214db603555-merged.mount: Deactivated successfully.
Dec 15 10:43:41 compute-0 podman[108368]: 2025-12-15 10:43:41.276304609 +0000 UTC m=+0.375454915 container remove f798f90f5b5c5762636a8aba10f69149a482fabcb902a5492cacf54be3fb2e9b (image=quay.io/ceph/ceph:v19, name=objective_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 15 10:43:41 compute-0 systemd[1]: libpod-conmon-f798f90f5b5c5762636a8aba10f69149a482fabcb902a5492cacf54be3fb2e9b.scope: Deactivated successfully.
Dec 15 10:43:41 compute-0 sudo[108365]: pam_unix(sudo:session): session closed for user root
Dec 15 10:43:41 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:41 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:43:42 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v158: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 15 10:43:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:42 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e00036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:42 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:42 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:42 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:42.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:42 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:42 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:42 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:42 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:42.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:43:42] "GET /metrics HTTP/1.1" 200 48233 "" "Prometheus/2.51.0"
Dec 15 10:43:42 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:43:42] "GET /metrics HTTP/1.1" 200 48233 "" "Prometheus/2.51.0"
Dec 15 10:43:43 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 15 10:43:43 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:43:43 compute-0 ceph-mon[74356]: pgmap v158: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 15 10:43:43 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:43:43 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:43 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8002c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:43 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:43 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 15 10:43:44 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v159: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 15 10:43:44 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:44 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:44 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:44 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:44 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:44.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:44 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:44 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e00036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:44 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:44 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:43:44 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:44.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:43:45 compute-0 ceph-mon[74356]: pgmap v159: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 15 10:43:45 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:45 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d00034b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:46 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 767 B/s wr, 5 op/s
Dec 15 10:43:46 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:46 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8002c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:46 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:46 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:46 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:46.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:46 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:46 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:46 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:46 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 15 10:43:46 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:46 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 15 10:43:46 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:46 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 15 10:43:46 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:46 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:46 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:46.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:47 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:43:47 compute-0 ceph-mon[74356]: pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 767 B/s wr, 5 op/s
Dec 15 10:43:47 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:47 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e00036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:48 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 767 B/s wr, 5 op/s
Dec 15 10:43:48 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:48 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d00034d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:48 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:48 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:43:48 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:48.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:43:48 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:48 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8002c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:48 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:48 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:43:48 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:48.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:43:49 compute-0 ceph-mon[74356]: pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 767 B/s wr, 5 op/s
Dec 15 10:43:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:49 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:49 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 15 10:43:50 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 1.1 KiB/s wr, 6 op/s
Dec 15 10:43:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:50 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e00036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:50 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:50 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:43:50 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:50.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:43:50 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:50 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4000f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:50 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:50 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:43:50 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:50.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:43:51 compute-0 ceph-mon[74356]: pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 1.1 KiB/s wr, 6 op/s
Dec 15 10:43:51 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:51 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:52 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:43:52 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Dec 15 10:43:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:52 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:52 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:52 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:43:52 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:52.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:43:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:52 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e00036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:52 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:52 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:52 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:52.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:52 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:43:52] "GET /metrics HTTP/1.1" 200 48243 "" "Prometheus/2.51.0"
Dec 15 10:43:52 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:43:52] "GET /metrics HTTP/1.1" 200 48243 "" "Prometheus/2.51.0"
Dec 15 10:43:53 compute-0 ceph-mon[74356]: pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Dec 15 10:43:53 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:53 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4000f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:54 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Dec 15 10:43:54 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:54 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8002cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:54 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:54 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:54 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:54.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:54 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:54 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:54 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:54 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:54 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:54.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:55 compute-0 sudo[108498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 15 10:43:55 compute-0 sudo[108498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:43:55 compute-0 sudo[108498]: pam_unix(sudo:session): session closed for user root
Dec 15 10:43:55 compute-0 ceph-mon[74356]: pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Dec 15 10:43:55 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:55 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e00036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:55 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-haproxy-nfs-cephfs-compute-0-ykblqa[95047]: [WARNING] 348/104355 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 15 10:43:56 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 1.2 KiB/s wr, 6 op/s
Dec 15 10:43:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:56 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4000f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:56 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:56 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:56 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:56.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:56 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:56 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8002cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:56 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:56 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:43:56 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:56.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:43:57 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:43:57 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:57 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:57 compute-0 ceph-mon[74356]: pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 1.2 KiB/s wr, 6 op/s
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [balancer INFO root] Optimize plan auto_2025-12-15_10:43:58
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [balancer INFO root] do_upmap
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [balancer INFO root] pools ['.rgw.root', 'images', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'vms', 'backups', 'default.rgw.control', '.nfs']
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Dec 15 10:43:58 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 15 10:43:58 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 15 10:43:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:58 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:43:58 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:43:58 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:58 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:43:58 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:43:58.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:43:58 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:58 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4002550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:58 compute-0 sudo[108528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:43:58 compute-0 sudo[108528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:43:58 compute-0 sudo[108528]: pam_unix(sudo:session): session closed for user root
Dec 15 10:43:58 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:43:58 compute-0 sudo[108553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 15 10:43:58 compute-0 sudo[108553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:43:58 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:43:58 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:43:58 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:43:58.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:43:59 compute-0 sudo[108553]: pam_unix(sudo:session): session closed for user root
Dec 15 10:43:59 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:43:59 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:43:59 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 15 10:43:59 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:43:59 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 15 10:43:59 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:43:59 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc001fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:43:59 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:43:59 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 15 10:43:59 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:44:00 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 15 10:44:00 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 15 10:44:00 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 15 10:44:00 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 15 10:44:00 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 15 10:44:00 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:44:00 compute-0 ceph-mon[74356]: pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Dec 15 10:44:00 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:44:00 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 15 10:44:00 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:44:00 compute-0 sudo[108610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:44:00 compute-0 sudo[108610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:44:00 compute-0 sudo[108610]: pam_unix(sudo:session): session closed for user root
Dec 15 10:44:00 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Dec 15 10:44:00 compute-0 sudo[108635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 15 10:44:00 compute-0 sudo[108635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:44:00 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:00 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8002cf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:00 compute-0 podman[108701]: 2025-12-15 10:44:00.564755132 +0000 UTC m=+0.020489170 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:44:00 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:00 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:44:00 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:44:00.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:44:00 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:00 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:00 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:00 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:00 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:44:00.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:00 compute-0 podman[108701]: 2025-12-15 10:44:00.90714486 +0000 UTC m=+0.362878878 container create b1d739ea23f95647a4a4bac30b6caf7f89ef08c0f4c1313f3613ce17883e70d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 15 10:44:01 compute-0 systemd[1]: Started libpod-conmon-b1d739ea23f95647a4a4bac30b6caf7f89ef08c0f4c1313f3613ce17883e70d4.scope.
Dec 15 10:44:01 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:44:01 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:44:01 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 15 10:44:01 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 15 10:44:01 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 15 10:44:01 compute-0 ceph-mon[74356]: pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Dec 15 10:44:01 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:01 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4002550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:01 compute-0 podman[108701]: 2025-12-15 10:44:01.472853419 +0000 UTC m=+0.928587467 container init b1d739ea23f95647a4a4bac30b6caf7f89ef08c0f4c1313f3613ce17883e70d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 15 10:44:01 compute-0 podman[108701]: 2025-12-15 10:44:01.479538827 +0000 UTC m=+0.935272865 container start b1d739ea23f95647a4a4bac30b6caf7f89ef08c0f4c1313f3613ce17883e70d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:44:01 compute-0 eager_shirley[108719]: 167 167
Dec 15 10:44:01 compute-0 systemd[1]: libpod-b1d739ea23f95647a4a4bac30b6caf7f89ef08c0f4c1313f3613ce17883e70d4.scope: Deactivated successfully.
Dec 15 10:44:01 compute-0 podman[108701]: 2025-12-15 10:44:01.830099361 +0000 UTC m=+1.285833409 container attach b1d739ea23f95647a4a4bac30b6caf7f89ef08c0f4c1313f3613ce17883e70d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:44:01 compute-0 podman[108701]: 2025-12-15 10:44:01.831090184 +0000 UTC m=+1.286824202 container died b1d739ea23f95647a4a4bac30b6caf7f89ef08c0f4c1313f3613ce17883e70d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_shirley, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:44:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-703fb20fcd56a69b77c041abc5aa689676a1dbe8b9193d87d5da0c0f5efbd68a-merged.mount: Deactivated successfully.
Dec 15 10:44:01 compute-0 podman[108701]: 2025-12-15 10:44:01.88976596 +0000 UTC m=+1.345499998 container remove b1d739ea23f95647a4a4bac30b6caf7f89ef08c0f4c1313f3613ce17883e70d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_shirley, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:44:01 compute-0 systemd[1]: libpod-conmon-b1d739ea23f95647a4a4bac30b6caf7f89ef08c0f4c1313f3613ce17883e70d4.scope: Deactivated successfully.
Dec 15 10:44:02 compute-0 podman[108745]: 2025-12-15 10:44:02.029774061 +0000 UTC m=+0.025685410 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:44:02 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:44:02 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:44:02 compute-0 podman[108745]: 2025-12-15 10:44:02.212440454 +0000 UTC m=+0.208351823 container create 0f69587abb751dd092645dc0ad2d6e48a9b49746e921b5dfa3ef8eecb348097b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shaw, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 15 10:44:02 compute-0 systemd[1]: Started libpod-conmon-0f69587abb751dd092645dc0ad2d6e48a9b49746e921b5dfa3ef8eecb348097b.scope.
Dec 15 10:44:02 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:44:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58374951eb6b71bce84e4eec9aab6a39775fddafd4cdaaf6f94038c7002525fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:44:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58374951eb6b71bce84e4eec9aab6a39775fddafd4cdaaf6f94038c7002525fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:44:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58374951eb6b71bce84e4eec9aab6a39775fddafd4cdaaf6f94038c7002525fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:44:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58374951eb6b71bce84e4eec9aab6a39775fddafd4cdaaf6f94038c7002525fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:44:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58374951eb6b71bce84e4eec9aab6a39775fddafd4cdaaf6f94038c7002525fd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 15 10:44:02 compute-0 podman[108745]: 2025-12-15 10:44:02.317763323 +0000 UTC m=+0.313674652 container init 0f69587abb751dd092645dc0ad2d6e48a9b49746e921b5dfa3ef8eecb348097b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 15 10:44:02 compute-0 podman[108745]: 2025-12-15 10:44:02.325564157 +0000 UTC m=+0.321475496 container start 0f69587abb751dd092645dc0ad2d6e48a9b49746e921b5dfa3ef8eecb348097b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:44:02 compute-0 podman[108745]: 2025-12-15 10:44:02.32901037 +0000 UTC m=+0.324921769 container attach 0f69587abb751dd092645dc0ad2d6e48a9b49746e921b5dfa3ef8eecb348097b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shaw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:44:02 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:02 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc001fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:02 compute-0 epic_shaw[108760]: --> passed data devices: 0 physical, 1 LVM
Dec 15 10:44:02 compute-0 epic_shaw[108760]: --> All data devices are unavailable
Dec 15 10:44:02 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:02 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:44:02 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:44:02.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:44:02 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:02 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8002cf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:02 compute-0 systemd[1]: libpod-0f69587abb751dd092645dc0ad2d6e48a9b49746e921b5dfa3ef8eecb348097b.scope: Deactivated successfully.
Dec 15 10:44:02 compute-0 conmon[108760]: conmon 0f69587abb751dd09264 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0f69587abb751dd092645dc0ad2d6e48a9b49746e921b5dfa3ef8eecb348097b.scope/container/memory.events
Dec 15 10:44:02 compute-0 podman[108745]: 2025-12-15 10:44:02.708744547 +0000 UTC m=+0.704655876 container died 0f69587abb751dd092645dc0ad2d6e48a9b49746e921b5dfa3ef8eecb348097b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shaw, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 15 10:44:02 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:02 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:02 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:44:02.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:02 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:44:02] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Dec 15 10:44:02 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:44:02] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Dec 15 10:44:03 compute-0 ceph-mon[74356]: pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:44:03 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:03 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-58374951eb6b71bce84e4eec9aab6a39775fddafd4cdaaf6f94038c7002525fd-merged.mount: Deactivated successfully.
Dec 15 10:44:04 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:44:04 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:04 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4002550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:04 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:04 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:04 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:44:04.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:04 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:04 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc001fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:04 compute-0 podman[108745]: 2025-12-15 10:44:04.767624853 +0000 UTC m=+2.763536192 container remove 0f69587abb751dd092645dc0ad2d6e48a9b49746e921b5dfa3ef8eecb348097b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:44:04 compute-0 systemd[1]: libpod-conmon-0f69587abb751dd092645dc0ad2d6e48a9b49746e921b5dfa3ef8eecb348097b.scope: Deactivated successfully.
Dec 15 10:44:04 compute-0 sudo[108635]: pam_unix(sudo:session): session closed for user root
Dec 15 10:44:04 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:04 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:04 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:44:04.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:04 compute-0 sudo[108790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:44:04 compute-0 sudo[108790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:44:04 compute-0 sudo[108790]: pam_unix(sudo:session): session closed for user root
Dec 15 10:44:04 compute-0 sudo[108815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 -- lvm list --format json
Dec 15 10:44:04 compute-0 sudo[108815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:44:05 compute-0 ceph-mon[74356]: pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:44:05 compute-0 podman[108881]: 2025-12-15 10:44:05.383729217 +0000 UTC m=+0.050294273 container create 9f8646fa1a4fe900c29acef3bee772c6c02d915bb86146567927b3cae322cbbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 15 10:44:05 compute-0 systemd[1]: Started libpod-conmon-9f8646fa1a4fe900c29acef3bee772c6c02d915bb86146567927b3cae322cbbe.scope.
Dec 15 10:44:05 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:44:05 compute-0 podman[108881]: 2025-12-15 10:44:05.450374082 +0000 UTC m=+0.116939128 container init 9f8646fa1a4fe900c29acef3bee772c6c02d915bb86146567927b3cae322cbbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wilson, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 15 10:44:05 compute-0 podman[108881]: 2025-12-15 10:44:05.360156337 +0000 UTC m=+0.026721433 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:44:05 compute-0 podman[108881]: 2025-12-15 10:44:05.457294329 +0000 UTC m=+0.123859385 container start 9f8646fa1a4fe900c29acef3bee772c6c02d915bb86146567927b3cae322cbbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wilson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 15 10:44:05 compute-0 focused_wilson[108897]: 167 167
Dec 15 10:44:05 compute-0 systemd[1]: libpod-9f8646fa1a4fe900c29acef3bee772c6c02d915bb86146567927b3cae322cbbe.scope: Deactivated successfully.
Dec 15 10:44:05 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:05 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8002d10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:05 compute-0 podman[108881]: 2025-12-15 10:44:05.479591657 +0000 UTC m=+0.146156743 container attach 9f8646fa1a4fe900c29acef3bee772c6c02d915bb86146567927b3cae322cbbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wilson, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 15 10:44:05 compute-0 podman[108881]: 2025-12-15 10:44:05.480847677 +0000 UTC m=+0.147412733 container died 9f8646fa1a4fe900c29acef3bee772c6c02d915bb86146567927b3cae322cbbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:44:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1e79bea72103e92c7d0a70d0c5ccb095212ef0f95f62498e6e958491b0cd364-merged.mount: Deactivated successfully.
Dec 15 10:44:05 compute-0 podman[108881]: 2025-12-15 10:44:05.526164847 +0000 UTC m=+0.192729903 container remove 9f8646fa1a4fe900c29acef3bee772c6c02d915bb86146567927b3cae322cbbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 15 10:44:05 compute-0 systemd[1]: libpod-conmon-9f8646fa1a4fe900c29acef3bee772c6c02d915bb86146567927b3cae322cbbe.scope: Deactivated successfully.
Dec 15 10:44:05 compute-0 podman[108921]: 2025-12-15 10:44:05.657053799 +0000 UTC m=+0.023434305 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:44:05 compute-0 podman[108921]: 2025-12-15 10:44:05.790550377 +0000 UTC m=+0.156930863 container create 5ab5be21e28088a20888f8ee49e7f733b7a8220d0082a46d0d9d15605950853f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_jennings, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 15 10:44:06 compute-0 systemd[1]: Started libpod-conmon-5ab5be21e28088a20888f8ee49e7f733b7a8220d0082a46d0d9d15605950853f.scope.
Dec 15 10:44:06 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/787f7efb5744df0a3dc78de52025407b43d585e0851cb4be2894052c758fb3b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/787f7efb5744df0a3dc78de52025407b43d585e0851cb4be2894052c758fb3b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/787f7efb5744df0a3dc78de52025407b43d585e0851cb4be2894052c758fb3b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/787f7efb5744df0a3dc78de52025407b43d585e0851cb4be2894052c758fb3b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:44:06 compute-0 podman[108921]: 2025-12-15 10:44:06.081942771 +0000 UTC m=+0.448323287 container init 5ab5be21e28088a20888f8ee49e7f733b7a8220d0082a46d0d9d15605950853f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 15 10:44:06 compute-0 podman[108921]: 2025-12-15 10:44:06.089471007 +0000 UTC m=+0.455851493 container start 5ab5be21e28088a20888f8ee49e7f733b7a8220d0082a46d0d9d15605950853f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 15 10:44:06 compute-0 podman[108921]: 2025-12-15 10:44:06.096351451 +0000 UTC m=+0.462731957 container attach 5ab5be21e28088a20888f8ee49e7f733b7a8220d0082a46d0d9d15605950853f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_jennings, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:44:06 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:44:06 compute-0 interesting_jennings[108938]: {
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:     "0": [
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:         {
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:             "devices": [
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:                 "/dev/loop3"
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:             ],
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:             "lv_name": "ceph_lv0",
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:             "lv_size": "21470642176",
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MCSOGR-6V69-UVGq-8FpK-DBjb-LyDE-FLUPxJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=77365f67-614e-5a8d-b658-640395550c79,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7690eca0-4e87-4157-a045-1912448da925,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:             "lv_uuid": "MCSOGR-6V69-UVGq-8FpK-DBjb-LyDE-FLUPxJ",
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:             "name": "ceph_lv0",
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:             "tags": {
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:                 "ceph.block_uuid": "MCSOGR-6V69-UVGq-8FpK-DBjb-LyDE-FLUPxJ",
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:                 "ceph.cephx_lockbox_secret": "",
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:                 "ceph.cluster_fsid": "77365f67-614e-5a8d-b658-640395550c79",
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:                 "ceph.cluster_name": "ceph",
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:                 "ceph.crush_device_class": "",
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:                 "ceph.encrypted": "0",
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:                 "ceph.osd_fsid": "7690eca0-4e87-4157-a045-1912448da925",
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:                 "ceph.osd_id": "0",
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:                 "ceph.type": "block",
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:                 "ceph.vdo": "0",
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:                 "ceph.with_tpm": "0"
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:             },
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:             "type": "block",
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:             "vg_name": "ceph_vg0"
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:         }
Dec 15 10:44:06 compute-0 interesting_jennings[108938]:     ]
Dec 15 10:44:06 compute-0 interesting_jennings[108938]: }
Dec 15 10:44:06 compute-0 systemd[1]: libpod-5ab5be21e28088a20888f8ee49e7f733b7a8220d0082a46d0d9d15605950853f.scope: Deactivated successfully.
Dec 15 10:44:06 compute-0 podman[108921]: 2025-12-15 10:44:06.405697891 +0000 UTC m=+0.772078377 container died 5ab5be21e28088a20888f8ee49e7f733b7a8220d0082a46d0d9d15605950853f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_jennings, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 15 10:44:06 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:06 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:06 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:06 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:06 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:44:06.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-787f7efb5744df0a3dc78de52025407b43d585e0851cb4be2894052c758fb3b8-merged.mount: Deactivated successfully.
Dec 15 10:44:06 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:06 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4002550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:06 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:06 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:06 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:44:06.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:07 compute-0 podman[108921]: 2025-12-15 10:44:07.039488841 +0000 UTC m=+1.405869327 container remove 5ab5be21e28088a20888f8ee49e7f733b7a8220d0082a46d0d9d15605950853f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_jennings, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 15 10:44:07 compute-0 systemd[1]: libpod-conmon-5ab5be21e28088a20888f8ee49e7f733b7a8220d0082a46d0d9d15605950853f.scope: Deactivated successfully.
Dec 15 10:44:07 compute-0 sudo[108815]: pam_unix(sudo:session): session closed for user root
Dec 15 10:44:07 compute-0 sudo[108960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 15 10:44:07 compute-0 sudo[108960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:44:07 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:44:07 compute-0 sudo[108960]: pam_unix(sudo:session): session closed for user root
Dec 15 10:44:07 compute-0 sudo[108985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/77365f67-614e-5a8d-b658-640395550c79/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 77365f67-614e-5a8d-b658-640395550c79 -- raw list --format json
Dec 15 10:44:07 compute-0 sudo[108985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:44:07 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:07 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc003530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:07 compute-0 podman[109050]: 2025-12-15 10:44:07.626241768 +0000 UTC m=+0.022621520 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:44:08 compute-0 podman[109050]: 2025-12-15 10:44:08.097840533 +0000 UTC m=+0.494220265 container create 65bf6c153ea4d448a85bc7e8472506cb1be5efc05196c00cda8a9c4f2c649203 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hypatia, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 15 10:44:08 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:08 compute-0 ceph-mon[74356]: pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 15 10:44:08 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:08 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8002d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:08 compute-0 systemd[1]: Started libpod-conmon-65bf6c153ea4d448a85bc7e8472506cb1be5efc05196c00cda8a9c4f2c649203.scope.
Dec 15 10:44:08 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:44:08 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:08 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:44:08 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:44:08.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:44:08 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:08 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:08 compute-0 podman[109050]: 2025-12-15 10:44:08.740463154 +0000 UTC m=+1.136842916 container init 65bf6c153ea4d448a85bc7e8472506cb1be5efc05196c00cda8a9c4f2c649203 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hypatia, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 15 10:44:08 compute-0 podman[109050]: 2025-12-15 10:44:08.750852223 +0000 UTC m=+1.147231955 container start 65bf6c153ea4d448a85bc7e8472506cb1be5efc05196c00cda8a9c4f2c649203 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hypatia, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 15 10:44:08 compute-0 laughing_hypatia[109068]: 167 167
Dec 15 10:44:08 compute-0 systemd[1]: libpod-65bf6c153ea4d448a85bc7e8472506cb1be5efc05196c00cda8a9c4f2c649203.scope: Deactivated successfully.
Dec 15 10:44:08 compute-0 podman[109050]: 2025-12-15 10:44:08.76884756 +0000 UTC m=+1.165227292 container attach 65bf6c153ea4d448a85bc7e8472506cb1be5efc05196c00cda8a9c4f2c649203 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 15 10:44:08 compute-0 podman[109050]: 2025-12-15 10:44:08.771041612 +0000 UTC m=+1.167421584 container died 65bf6c153ea4d448a85bc7e8472506cb1be5efc05196c00cda8a9c4f2c649203 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 15 10:44:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-44e190d543bc57bf47846736626783a80895d815b85eaa7c7ed54f35005a4f49-merged.mount: Deactivated successfully.
Dec 15 10:44:08 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:08 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 15 10:44:08 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:44:08.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 15 10:44:08 compute-0 podman[109050]: 2025-12-15 10:44:08.890567834 +0000 UTC m=+1.286947566 container remove 65bf6c153ea4d448a85bc7e8472506cb1be5efc05196c00cda8a9c4f2c649203 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hypatia, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 15 10:44:08 compute-0 systemd[1]: libpod-conmon-65bf6c153ea4d448a85bc7e8472506cb1be5efc05196c00cda8a9c4f2c649203.scope: Deactivated successfully.
Dec 15 10:44:09 compute-0 podman[109094]: 2025-12-15 10:44:09.056684117 +0000 UTC m=+0.057982214 container create 2f114b4696808256e133789f60f90f1fc0eb7dbd3001f0f90290ff4546107e73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kapitsa, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 15 10:44:09 compute-0 systemd[1]: Started libpod-conmon-2f114b4696808256e133789f60f90f1fc0eb7dbd3001f0f90290ff4546107e73.scope.
Dec 15 10:44:09 compute-0 systemd[1]: Started libcrun container.
Dec 15 10:44:09 compute-0 podman[109094]: 2025-12-15 10:44:09.021312022 +0000 UTC m=+0.022610139 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 15 10:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2fe9465c1e9c92c4a4e3f9b6130de017619d21cbc9920c16fc44ba5cdc166b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 15 10:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2fe9465c1e9c92c4a4e3f9b6130de017619d21cbc9920c16fc44ba5cdc166b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 15 10:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2fe9465c1e9c92c4a4e3f9b6130de017619d21cbc9920c16fc44ba5cdc166b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 15 10:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2fe9465c1e9c92c4a4e3f9b6130de017619d21cbc9920c16fc44ba5cdc166b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 15 10:44:09 compute-0 podman[109094]: 2025-12-15 10:44:09.127319523 +0000 UTC m=+0.128617640 container init 2f114b4696808256e133789f60f90f1fc0eb7dbd3001f0f90290ff4546107e73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 15 10:44:09 compute-0 podman[109094]: 2025-12-15 10:44:09.147107059 +0000 UTC m=+0.148405156 container start 2f114b4696808256e133789f60f90f1fc0eb7dbd3001f0f90290ff4546107e73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kapitsa, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 15 10:44:09 compute-0 podman[109094]: 2025-12-15 10:44:09.186571747 +0000 UTC m=+0.187869864 container attach 2f114b4696808256e133789f60f90f1fc0eb7dbd3001f0f90290ff4546107e73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kapitsa, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 15 10:44:09 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:09 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4002550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:09 compute-0 ceph-mon[74356]: pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:09 compute-0 lvm[109185]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 15 10:44:09 compute-0 lvm[109185]: VG ceph_vg0 finished
Dec 15 10:44:09 compute-0 friendly_kapitsa[109110]: {}
Dec 15 10:44:09 compute-0 systemd[1]: libpod-2f114b4696808256e133789f60f90f1fc0eb7dbd3001f0f90290ff4546107e73.scope: Deactivated successfully.
Dec 15 10:44:09 compute-0 podman[109094]: 2025-12-15 10:44:09.83816918 +0000 UTC m=+0.839467307 container died 2f114b4696808256e133789f60f90f1fc0eb7dbd3001f0f90290ff4546107e73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kapitsa, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325)
Dec 15 10:44:09 compute-0 systemd[1]: libpod-2f114b4696808256e133789f60f90f1fc0eb7dbd3001f0f90290ff4546107e73.scope: Consumed 1.073s CPU time.
Dec 15 10:44:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c2fe9465c1e9c92c4a4e3f9b6130de017619d21cbc9920c16fc44ba5cdc166b-merged.mount: Deactivated successfully.
Dec 15 10:44:09 compute-0 podman[109094]: 2025-12-15 10:44:09.904058531 +0000 UTC m=+0.905356628 container remove 2f114b4696808256e133789f60f90f1fc0eb7dbd3001f0f90290ff4546107e73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 15 10:44:09 compute-0 systemd[1]: libpod-conmon-2f114b4696808256e133789f60f90f1fc0eb7dbd3001f0f90290ff4546107e73.scope: Deactivated successfully.
Dec 15 10:44:09 compute-0 sudo[108985]: pam_unix(sudo:session): session closed for user root
Dec 15 10:44:09 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 15 10:44:09 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:44:09 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 15 10:44:09 compute-0 ceph-mon[74356]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:44:10 compute-0 sudo[109203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 15 10:44:10 compute-0 sudo[109203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:44:10 compute-0 sudo[109203]: pam_unix(sudo:session): session closed for user root
Dec 15 10:44:10 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:10 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:10 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc003530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:10 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:10 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:10 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:44:10.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:10 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:10 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8002d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:10 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:10 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:10 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:44:10.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:10 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:44:10 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' 
Dec 15 10:44:10 compute-0 ceph-mon[74356]: pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:11 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:11 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:12 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:44:12 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:12 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:12 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4002550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:12 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:12 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:44:12 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:44:12.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:44:12 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:12 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc003530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:12 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:44:12] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Dec 15 10:44:12 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:44:12] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Dec 15 10:44:12 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:12 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:12 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:44:12.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:13 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 15 10:44:13 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:44:13 compute-0 ceph-mon[74356]: pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:13 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:44:13 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:13 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc003530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:14 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:14 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:14 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:14 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:14 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:44:14 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:44:14.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:44:14 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:14 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4002550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:14 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:14 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:14 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:44:14.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:15 compute-0 ceph-mon[74356]: pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:15 compute-0 sudo[109234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 15 10:44:15 compute-0 sudo[109234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:44:15 compute-0 sudo[109234]: pam_unix(sudo:session): session closed for user root
Dec 15 10:44:15 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:15 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8002d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:16 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 15 10:44:16 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:16 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc003530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:16 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:16 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:16 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:44:16.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:16 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:16 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f4009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:16 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:16 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:16 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:44:16.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:17 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:44:17 compute-0 ceph-mon[74356]: pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 15 10:44:17 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:17 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4002550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:18 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:18 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4002550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:18 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:18 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:18 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:44:18.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:18 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:18 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc003530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:18 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:18 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:18 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:44:18.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:19 compute-0 ceph-mon[74356]: pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:19 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:19 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f400abd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:20 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:20 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:20 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8002d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:20 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:20 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:44:20 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:44:20.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:44:20 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:20 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4002550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:20 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:20 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:20 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:44:20.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:21 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:21 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:21 compute-0 ceph-mon[74356]: pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:22 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:44:22 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:22 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f400abd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:22 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:22 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:44:22 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:44:22.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:44:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:22 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8002d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:22 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:44:22] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Dec 15 10:44:22 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:44:22] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Dec 15 10:44:22 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:22 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:22 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:44:22.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:23 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:23 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4002550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:23 compute-0 ceph-mon[74356]: pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:24 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:24 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:24 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:24 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:24 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:24 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:44:24.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:24 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:24 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f400abd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:24 compute-0 ceph-mon[74356]: pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:24 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:24 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:24 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:44:24.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:25 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:25 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8002d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:26 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 15 10:44:26 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:26 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4002550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:26 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:26 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:26 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:44:26.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:26 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:26 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:26 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:26 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:26 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:44:26.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:27 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:44:27 compute-0 ceph-mon[74356]: pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 15 10:44:27 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:27 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f400abd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:28 compute-0 ceph-mon[74356]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 15 10:44:28 compute-0 ceph-mon[74356]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:44:28 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:44:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:44:28 compute-0 ceph-mon[74356]: from='mgr.14514 192.168.122.100:0/3117075778' entity='mgr.compute-0.difmqj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 15 10:44:28 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:28 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:44:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:44:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 15 10:44:28 compute-0 ceph-mgr[74651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 15 10:44:28 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:28 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:28 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:44:28.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:28 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:28 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4002550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:28 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:28 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:28 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:44:28.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:29 compute-0 sshd-session[109273]: Accepted publickey for zuul from 192.168.122.10 port 35926 ssh2: ECDSA SHA256:RI7NGykFU6DY8VmZwamQwcvn1msDfj+uaMVMpAHHUfo
Dec 15 10:44:29 compute-0 systemd-logind[797]: New session 38 of user zuul.
Dec 15 10:44:29 compute-0 systemd[1]: Started Session 38 of User zuul.
Dec 15 10:44:29 compute-0 sshd-session[109273]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 15 10:44:29 compute-0 sudo[109278]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 15 10:44:29 compute-0 sudo[109278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 15 10:44:29 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:29 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:29 compute-0 ceph-mon[74356]: pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:30 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:30 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:30 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f400abd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:30 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:30 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:44:30 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:44:30.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:44:30 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:30 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:30 compute-0 ceph-mon[74356]: pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:30 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:30 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:30 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:44:30.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:31 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:31 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:32 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:44:32 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:32 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:32 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:32 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:32 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:44:32.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:32 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f400abd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:32 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-mgr-compute-0-difmqj[74647]: ::ffff:192.168.122.100 - - [15/Dec/2025:10:44:32] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Dec 15 10:44:32 compute-0 ceph-mgr[74651]: [prometheus INFO cherrypy.access.140425765283392] ::ffff:192.168.122.100 - - [15/Dec/2025:10:44:32] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Dec 15 10:44:32 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:32 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:32 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:44:32.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:33 compute-0 ceph-mon[74356]: pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:33 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:33 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:34 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:34 compute-0 ovs-vsctl[109481]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 15 10:44:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:34 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:34 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:34 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:34 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:44:34.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:34 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:34 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0004480 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:34 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:34 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:34 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:44:34.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:35 compute-0 sudo[109727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 15 10:44:35 compute-0 sudo[109727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 15 10:44:35 compute-0 sudo[109727]: pam_unix(sudo:session): session closed for user root
Dec 15 10:44:35 compute-0 ceph-mon[74356]: pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:35 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:35 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f400abd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:35 compute-0 lvm[109854]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 15 10:44:35 compute-0 lvm[109854]: VG ceph_vg0 finished
Dec 15 10:44:36 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 15 10:44:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:36 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:36 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:36 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:36 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:44:36.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:36 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:36 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:36 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:36 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:44:36 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:44:36.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:44:37 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:44:37 compute-0 crontab[110300]: (root) LIST (root)
Dec 15 10:44:37 compute-0 ceph-mon[74356]: pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 15 10:44:37 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:37 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0004480 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:38 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:38 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f400abf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:38 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:38 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:44:38 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:44:38.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:44:38 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:38 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:38 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:38 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:38 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:44:38.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:39 compute-0 ceph-mon[74356]: pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:39 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:39 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8004cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:39 compute-0 systemd[1]: Starting Hostname Service...
Dec 15 10:44:39 compute-0 systemd[1]: Started Hostname Service.
Dec 15 10:44:40 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:40 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d0004480 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:40 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:40 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 15 10:44:40 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.100 - anonymous [15/Dec/2025:10:44:40.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 15 10:44:40 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:40 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f400ac10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:40 compute-0 radosgw[93194]: ====== starting new request req=0x7f1a9ce235d0 =====
Dec 15 10:44:40 compute-0 radosgw[93194]: ====== req done req=0x7f1a9ce235d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 15 10:44:40 compute-0 radosgw[93194]: beast: 0x7f1a9ce235d0: 192.168.122.102 - anonymous [15/Dec/2025:10:44:40.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 15 10:44:41 compute-0 ceph-mon[74356]: pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:41 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:41 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5e0001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 15 10:44:42 compute-0 ceph-mgr[74651]: log_channel(cluster) log [DBG] : pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 15 10:44:42 compute-0 ceph-mon[74356]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:44:42.376865) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795482376922, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1607, "num_deletes": 254, "total_data_size": 3119798, "memory_usage": 3165624, "flush_reason": "Manual Compaction"}
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795482389377, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 1800042, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10743, "largest_seqno": 12349, "table_properties": {"data_size": 1794665, "index_size": 2581, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13522, "raw_average_key_size": 20, "raw_value_size": 1782988, "raw_average_value_size": 2637, "num_data_blocks": 115, "num_entries": 676, "num_filter_entries": 676, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765795316, "oldest_key_time": 1765795316, "file_creation_time": 1765795482, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d36e3d93-cef6-4482-9c71-0054ae87e0c9", "db_session_id": "8WPMBXYVT9DSSQWRN3T3", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 12567 microseconds, and 5532 cpu microseconds.
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:44:42.389433) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 1800042 bytes OK
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:44:42.389484) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:44:42.390634) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:44:42.390651) EVENT_LOG_v1 {"time_micros": 1765795482390645, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:44:42.390672) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3113131, prev total WAL file size 3113131, number of live WAL files 2.
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:44:42.391584) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323535' seq:0, type:0; will stop at (end)
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(1757KB)], [26(13MB)]
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795482391617, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 15727705, "oldest_snapshot_seqno": -1}
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4244 keys, 13732290 bytes, temperature: kUnknown
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795482474493, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 13732290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13700227, "index_size": 20369, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 107408, "raw_average_key_size": 25, "raw_value_size": 13619087, "raw_average_value_size": 3209, "num_data_blocks": 872, "num_entries": 4244, "num_filter_entries": 4244, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765794889, "oldest_key_time": 0, "file_creation_time": 1765795482, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d36e3d93-cef6-4482-9c71-0054ae87e0c9", "db_session_id": "8WPMBXYVT9DSSQWRN3T3", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:44:42.474740) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 13732290 bytes
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:44:42.477208) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.6 rd, 165.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 13.3 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(16.4) write-amplify(7.6) OK, records in: 4689, records dropped: 445 output_compression: NoCompression
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:44:42.477225) EVENT_LOG_v1 {"time_micros": 1765795482477216, "job": 10, "event": "compaction_finished", "compaction_time_micros": 82940, "compaction_time_cpu_micros": 32474, "output_level": 6, "num_output_files": 1, "total_output_size": 13732290, "num_input_records": 4689, "num_output_records": 4244, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795482477570, "job": 10, "event": "table_file_deletion", "file_number": 28}
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765795482479714, "job": 10, "event": "table_file_deletion", "file_number": 26}
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:44:42.391487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:44:42.479772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:44:42.479778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:44:42.479779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:44:42.479780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 15 10:44:42 compute-0 ceph-mon[74356]: rocksdb: (Original Log Time 2025/12/15-10:44:42.479782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 15 10:44:42 compute-0 ceph-77365f67-614e-5a8d-b658-640395550c79-nfs-cephfs-2-0-compute-0-stewbo[94553]: 15/12/2025 10:44:42 : epoch 693fe539 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5f400ac10 fd 48 proxy header rest len failed header rlen = % (will set dead)
